# Dataset: qasper

Languages: en-US
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: expert-generated
Annotations Creators: expert-generated
Source Datasets: extended|s2orc
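Each record in the preview below carries three string fields (`id`, `title`, `abstract`) and two nested JSON fields (`full_text`, `qas`). The layout can be sketched as a plain Python record; the values here are abbreviated illustrations of the preview entry, not complete dataset contents, and the access patterns are a minimal sketch rather than an official loader API:

```python
# Toy record mirroring the preview schema: full_text holds parallel lists of
# section names and paragraph lists; qas holds parallel lists of questions,
# question ids, and answer annotations. Values are truncated for illustration.
record = {
    "id": "1909.00694",
    "title": "Minimally Supervised Learning of Affective Events Using Discourse Relations",
    "abstract": "Recognizing affective events that trigger positive or negative sentiment ...",
    "full_text": {
        "section_name": ["Introduction", "Related Work"],
        "paragraphs": [["Affective events are events that ..."],
                       ["Learning affective events is closely related to ..."]],
    },
    "qas": {
        "question": ["What is the seed lexicon?"],
        "question_id": ["753990d0b621d390ed58f20c4d9e4f065f0dc672"],
        "answers": [
            {"answer": [{
                "unanswerable": False,
                "extractive_spans": [],
                "yes_no": None,
                "free_form_answer": "a vocabulary of positive and negative predicates",
                "evidence": ["The seed lexicon consists of positive and negative predicates. ..."],
            }]}
        ],
    },
}

# Sections pair with their paragraph lists by index.
for name, paras in zip(record["full_text"]["section_name"],
                       record["full_text"]["paragraphs"]):
    print(f"{name}: {len(paras)} paragraph(s)")

# Each question carries one or more answer annotations; an answer is either
# a free-form string, a list of extractive spans, a yes/no flag, or unanswerable.
for question, ans in zip(record["qas"]["question"], record["qas"]["answers"]):
    first = ans["answer"][0]
    print(question, "->", first["free_form_answer"] or first["extractive_spans"])
```

Note that `question`, `question_id`, and `answers` are parallel lists, so the i-th entry of each refers to the same question.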
## Dataset Preview

Fields: id (string), title (string), abstract (string), full_text (json), qas (json)
1909.00694
Minimally Supervised Learning of Affective Events Using Discourse Relations
Recognizing affective events that trigger positive or negative sentiment has a wide range of natural language processing applications but remains a challenging problem mainly because the polarity of an event is not necessarily predictable from its constituent words. In this paper, we propose to propagate affective polarity using discourse relations. Our method is simple and only requires a very small seed lexicon and a large raw corpus. Our experiments using Japanese data show that our method learns affective events effectively without manually labeled data. It also improves supervised learning results when labeled data are small.
{ "section_name": [ "Introduction", "Related Work", "Proposed Method", "Proposed Method ::: Polarity Function", "Proposed Method ::: Discourse Relation-Based Event Pairs", "Proposed Method ::: Discourse Relation-Based Event Pairs ::: AL (Automatically Labeled Pairs)", "Proposed Method ::: Discourse Relation-Based Event Pairs ::: CA (Cause Pairs)", "Proposed Method ::: Discourse Relation-Based Event Pairs ::: CO (Concession Pairs)", "Proposed Method ::: Loss Functions", "Experiments", "Experiments ::: Dataset", "Experiments ::: Dataset ::: AL, CA, and CO", "Experiments ::: Dataset ::: ACP (ACP Corpus)", "Experiments ::: Model Configurations", "Experiments ::: Results and Discussion", "Conclusion", "Acknowledgments", "Appendices ::: Seed Lexicon ::: Positive Words", "Appendices ::: Seed Lexicon ::: Negative Words", "Appendices ::: Settings of Encoder ::: BiGRU", "Appendices ::: Settings of Encoder ::: BERT" ], "paragraphs": [ [ "Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive).", "Learning affective events is challenging because, as the examples above suggest, the polarity of an event is not necessarily predictable from its constituent words. 
Combined with the unbounded combinatorial nature of language, the non-compositionality of affective polarity entails the need for large amounts of world knowledge, which can hardly be learned from small annotated data.", "In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.", "We trained the models using a Japanese web corpus. Given the minimum amount of supervision, they performed well. In addition, the combination of annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were small." ], [ "Learning affective events is closely related to sentiment analysis. Whereas sentiment analysis usually focuses on the polarity of what are described (e.g., movies), we work on how people are typically affected by events. 
In sentiment analysis, much attention has been paid to compositionality. Word-level polarity BIBREF5, BIBREF6, BIBREF7 and the roles of negation and intensification BIBREF8, BIBREF6, BIBREF9 are among the most important topics. In contrast, we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching cold).", "Label propagation from seed instances is a common approach to inducing sentiment polarities. While BIBREF5 and BIBREF10 worked on word- and phrase-level polarities, BIBREF0 dealt with event-level polarities. BIBREF5 and BIBREF10 linked instances using co-occurrence information and/or phrase-level coordinations (e.g., “$A$ and $B$” and “$A$ but $B$”). We shift our scope to event pairs that are more complex than phrase pairs, and consequently exploit discourse connectives as event-level counterparts of phrase-level conjunctions.", "BIBREF0 constructed a network of events using word embedding-derived similarities. Compared with this method, our discourse relation-based linking of events is much simpler and more intuitive.", "Some previous studies made use of document structure to understand the sentiment. BIBREF11 proposed a sentiment-specific pre-training strategy using unlabeled dialog data (tweet-reply pairs). BIBREF12 proposed a method of building a polarity-tagged corpus (ACP Corpus). They automatically gathered sentences that had positive or negative opinions utilizing HTML layout structures in addition to linguistic patterns. Our method depends only on raw texts and thus has wider applicability.", "" ], [ "" ], [ "", "Our goal is to learn the polarity function $p(x)$, which predicts the sentiment polarity score of an event $x$. We approximate $p(x)$ by a neural network with the following form:", "${\\rm Encoder}$ outputs a vector representation of the event $x$. ${\\rm Linear}$ is a fully-connected layer and transforms the representation into a scalar. 
${\\rm tanh}$ is the hyperbolic tangent and transforms the scalar into a score ranging from $-1$ to 1. In Section SECREF21, we consider two specific implementations of ${\\rm Encoder}$.", "" ], [ "Our method requires a very small seed lexicon and a large raw corpus. We assume that we can automatically extract discourse-tagged event pairs, $(x_{i1}, x_{i2})$ ($i=1, \\cdots$) from the raw corpus. We refer to $x_{i1}$ and $x_{i2}$ as former and latter events, respectively. As shown in Figure FIGREF1, we limit our scope to two discourse relations: Cause and Concession.", "The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types.", "" ], [ "The seed lexicon matches (1) the latter event but (2) not the former event, and (3) their discourse relation type is Cause or Concession. If the discourse relation type is Cause, the former event is given the same score as the latter. Likewise, if the discourse relation type is Concession, the former event is given the opposite of the latter's score. They are used as reference scores during training.", "" ], [ "The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Cause. We assume the two events have the same polarities.", "" ], [ "The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Concession. We assume the two events have the reversed polarities.", "" ], [ "Using AL, CA, and CO data, we optimize the parameters of the polarity function $p(x)$. 
We define a loss function for each of the three types of event pairs and sum up the multiple loss functions.", "We use mean squared error to construct loss functions. For the AL data, the loss function is defined as:", "where $x_{i1}$ and $x_{i2}$ are the $i$-th pair of the AL data. $r_{i1}$ and $r_{i2}$ are the automatically-assigned scores of $x_{i1}$ and $x_{i2}$, respectively. $N_{\\rm AL}$ is the total number of AL pairs, and $\\lambda _{\\rm AL}$ is a hyperparameter.", "For the CA data, the loss function is defined as:", "$y_{i1}$ and $y_{i2}$ are the $i$-th pair of the CA pairs. $N_{\\rm CA}$ is the total number of CA pairs. $\\lambda _{\\rm CA}$ and $\\mu$ are hyperparameters. The first term makes the scores of the two events closer while the second term prevents the scores from shrinking to zero.", "The loss function for the CO data is defined analogously:", "The difference is that the first term makes the scores of the two events distant from each other.", "" ], [ "" ], [ "" ], [ "As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.", ". 
重大な失敗を犯したので、仕事をクビになった。", "Because [I] made a serious mistake, [I] got fired.", "From this sentence, we extracted the event pair of “重大な失敗を犯す” ([I] make a serious mistake) and “仕事をクビになる” ([I] get fired), and tagged it with Cause.", "We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16." ], [ "We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:", ". 作業が楽だ。", "The work is easy.", ". 駐車場がない。", "There is no parking lot.", "Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.", "The objective function for supervised training is:", "", "where $v_i$ is the $i$-th event, $R_i$ is the reference score of $v_i$, and $N_{\\rm ACP}$ is the number of the events of the ACP Corpus.", "To optimize the hyperparameters, we used the dev set of the ACP Corpus. For the evaluation, we used the test set of the ACP Corpus. The model output was classified as positive if $p(x) > 0$ and negative if $p(x) \\le 0$.", "" ], [ "As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. 
GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.", "BERT BIBREF17 is a pre-trained multi-layer bidirectional Transformer BIBREF18 encoder. Its output is the final hidden state corresponding to the special classification tag ([CLS]). For the details of ${\\rm Encoder}$, see Sections SECREF30.", "We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\\mathcal {L}_{\\rm AL}$, $\\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$, $\\mathcal {L}_{\\rm ACP}$, and $\\mathcal {L}_{\\rm ACP} + \\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$.", "" ], [ "", "Table TABREF23 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexicon. We can see that the seed lexicon itself had practically no impact on prediction.", "The models in the top block performed considerably better than the random baselines. The performance gaps with their (semi-)supervised counterparts, shown in the middle block, were less than 7%. This demonstrates the effectiveness of discourse relation-based label propagation.", "Comparing the model variants, we obtained the highest score with the BiGRU encoder trained with the AL+CA+CO dataset. BERT was competitive but its performance went down if CA and CO were used in addition to AL. We conjecture that BERT was more sensitive to noises found more frequently in CA and CO.", "Contrary to our expectations, supervised models (ACP) outperformed semi-supervised models (ACP+AL+CA+CO). 
This suggests that the training set of 0.6 million events is sufficiently large for training the models. For comparison, we trained the models with a subset (6,000 events) of the ACP dataset. As the results shown in Table TABREF24 demonstrate, our method is effective when labeled data are small.", "The result of hyperparameter optimization for the BiGRU encoder was as follows:", "As the CA and CO pairs were equal in size (Table TABREF16), $\\lambda _{\\rm CA}$ and $\\lambda _{\\rm CO}$ were comparable values. $\\lambda _{\\rm CA}$ was about one-third of $\\lambda _{\\rm CO}$, and this indicated that the CA pairs were noisier than the CO pairs. A major type of CA pairs that violates our assumption was in the form of “$\\textit {problem}_{\\text{negative}}$ causes $\\textit {solution}_{\\text{positive}}$”:", ". (悪いところがある, よくなるように努力する)", "(there is a bad point, [I] try to improve [it])", "The polarities of the two events were reversed in spite of the Cause relation, and this lowered the value of $\\lambda _{\\rm CA}$.", "Some examples of model outputs are shown in Table TABREF26. The first two examples suggest that our model successfully learned negation without explicit supervision. Similarly, the next two examples differ only in voice but the model correctly recognized that they had opposite polarities. The last two examples share the predicate “落とす\" (drop) and only the objects are different. The second event “肩を落とす\" (lit. drop one's shoulders) is an idiom that expresses a disappointed feeling. The examples demonstrate that our model correctly learned non-compositional expressions.", "" ], [ "In this paper, we proposed to use discourse relations to effectively propagate polarities of affective events from seeds. Experiments show that, even with a minimal amount of supervision, the proposed method performed well.", "Although event pairs linked by discourse analysis are shown to be useful, they nevertheless contain noises. 
Adding linguistically-motivated filtering rules would help improve the performance." ], [ "We thank Nobuhiro Kaji for providing the ACP Corpus and Hirokazu Kiyomaru and Yudai Kishimoto for their help in extracting event pairs. This work was partially supported by Yahoo! Japan Corporation." ], [ "喜ぶ (rejoice), 嬉しい (be glad), 楽しい (be pleasant), 幸せ (be happy), 感動 (be impressed), 興奮 (be excited), 懐かしい (feel nostalgic), 好き (like), 尊敬 (respect), 安心 (be relieved), 感心 (admire), 落ち着く (be calm), 満足 (be satisfied), 癒される (be healed), and スッキリ (be refreshed)." ], [ "怒る (get angry), 悲しい (be sad), 寂しい (be lonely), 怖い (be scared), 不安 (feel anxious), 恥ずかしい (be embarrassed), 嫌 (hate), 落ち込む (feel down), 退屈 (be bored), 絶望 (feel hopeless), 辛い (have a hard time), 困る (have trouble), 憂鬱 (be depressed), 心配 (be worried), and 情けない (be sorry)." ], [ "The dimension of the embedding layer was 256. The embedding layer was initialized with the word embeddings pretrained using the Web corpus. The input sentences were segmented into words by the morphological analyzer Juman++. The vocabulary size was 100,000. The number of hidden layers was 2. The dimension of hidden units was 256. The optimizer was Momentum SGD BIBREF21. The mini-batch size was 1024. We ran 100 epochs and selected the snapshot that achieved the highest score for the dev set." ], [ "We used a Japanese BERT model pretrained with Japanese Wikipedia. The input sentences were segmented into words by Juman++, and words were broken into subwords by applying BPE BIBREF20. The vocabulary size was 32,000. The maximum length of an input sequence was 128. The number of hidden layers was 12. The dimension of hidden units was 768. The number of self-attention heads was 12. The optimizer was Adam BIBREF19. The mini-batch size was 32. We ran 1 epoch." ] ] }
{ "question": [ "What is the seed lexicon?", "What are the results?", "How are relations used to propagate polarity?", "How big is the Japanese data?", "What are labels available in dataset for supervision?", "How big are improvements of supervszed learning results trained on smalled labeled data enhanced with proposed approach copared to basic approach?", "How does their model learn using mostly raw data?", "How big is seed lexicon used for training?", "How large is raw corpus used for training?" ], "question_id": [ "753990d0b621d390ed58f20c4d9e4f065f0dc672", "9d578ddccc27dd849244d632dd0f6bf27348ad81", "02e4bf719b1a504e385c35c6186742e720bcb281", "44c4bd6decc86f1091b5fc0728873d9324cdde4e", "86abeff85f3db79cf87a8c993e5e5aa61226dc98", "c029deb7f99756d2669abad0a349d917428e9c12", "39f8db10d949c6b477fa4b51e7c184016505884f", "d0bc782961567dc1dd7e074b621a6d6be44bb5b4", "a592498ba2fac994cd6fad7372836f0adb37e22a" ], "nlp_background": [ "two", "two", "two", "two", "zero", "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "a vocabulary of positive and negative predicates that helps determine the polarity score of an event", "evidence": [ "The seed lexicon consists of positive and negative 
predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types." ], "highlighted_evidence": [ "The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event.", "It is a " ] }, { "unanswerable": false, "extractive_spans": [ "seed lexicon consists of positive and negative predicates" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types." ], "highlighted_evidence": [ "The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event." 
] } ], "annotation_id": [ "31e85022a847f37c15fd0415f3c450c74c8e4755", "95da0a6e1b08db74a405c6a71067c9b272a50ff5" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "2cfd959e433f290bb50b55722370f0d22fe090b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Using all data to train: AL -- BiGRU achieved 0.843 accuracy, AL -- BERT achieved 0.863 accuracy, AL+CA+CO -- BiGRU achieved 0.866 accuracy, AL+CA+CO -- BERT achieved 0.835, accuracy, ACP -- BiGRU achieved 0.919 accuracy, ACP -- BERT achived 0.933, accuracy, ACP+AL+CA+CO -- BiGRU achieved 0.917 accuracy, ACP+AL+CA+CO -- BERT achieved 0.913 accuracy. \nUsing a subset to train: BERT achieved 0.876 accuracy using ACP (6K), BERT achieved 0.886 accuracy using ACP (6K) + AL, BiGRU achieved 0.830 accuracy using ACP (6K), BiGRU achieved 0.879 accuracy using ACP (6K) + AL + CA + CO.", "evidence": [ "FLOAT SELECTED: Table 3: Performance of various models on the ACP test set.", "FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data.", "As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.", "We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\\mathcal {L}_{\\rm AL}$, $\\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$, $\\mathcal {L}_{\\rm ACP}$, and $\\mathcal {L}_{\\rm ACP} + \\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$." 
], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Performance of various models on the ACP test set.", "FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data.", "As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. ", "We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\\mathcal {L}_{\\rm AL}$, $\\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$, $\\mathcal {L}_{\\rm ACP}$, and $\\mathcal {L}_{\\rm ACP} + \\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$." ] } ], "annotation_id": [ "1e5e867244ea656c4b7632628086209cf9bae5fa" ], "worker_id": [ "2cfd959e433f290bb50b55722370f0d22fe090b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "based on the relation between events, the suggested polarity of one event can determine the possible polarity of the other event ", "evidence": [ "In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. 
Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event." ], "highlighted_evidence": [ "As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event." 
] }, { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "cause relation: both events in the relation should have the same polarity; concession relation: events should have opposite polarity", "evidence": [ "In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.", "The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types." 
], "highlighted_evidence": [ "As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.", "The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation." 
] } ], "annotation_id": [ "49a78a07d2eed545556a835ccf2eb40e5eee9801", "acd6d15bd67f4b1496ee8af1c93c33e7d59c89e1" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "2cfd959e433f290bb50b55722370f0d22fe090b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "7000000 pairs of events were extracted from the Japanese Web corpus, 529850 pairs of events were extracted from the ACP corpus", "evidence": [ "As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.", "We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16.", "FLOAT SELECTED: Table 1: Statistics of the AL, CA, and CO datasets.", "We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. 
Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:", "Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.", "FLOAT SELECTED: Table 2: Details of the ACP dataset." ], "highlighted_evidence": [ "As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. ", "From the corpus of about 100 million sentences, we obtained 1.4 million event pairs for AL, 41 million for CA, and 6 million for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that each was five times larger than AL. The results are shown in Table TABREF16.", "FLOAT SELECTED: Table 1: Statistics of the AL, CA, and CO datasets.", "We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well.", "Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.", "FLOAT SELECTED: Table 2: Details of the ACP dataset." ] }, { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "The ACP corpus has around 700k events split into positive and negative polarity ", "evidence": [ "FLOAT SELECTED: Table 2: Details of the ACP dataset." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Details of the ACP dataset."
] } ], "annotation_id": [ "36926a4c9e14352c91111150aa4c6edcc5c0770f", "75b6dd28ccab20a70087635d89c2b22d0e99095c" ], "worker_id": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "negative", "positive" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive)." ], "highlighted_evidence": [ "In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive)." ] } ], "annotation_id": [ "2d8c7df145c37aad905e48f64d8caa69e54434d4" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "3%", "evidence": [ "FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data." 
] } ], "annotation_id": [ "df4372b2e8d9bb2039a5582f192768953b01d904" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "by exploiting discourse relations to propagate polarity from seed predicates to final sentiment polarity", "evidence": [ "In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ and $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event." ], "highlighted_evidence": [ "In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive)."
] } ], "annotation_id": [ "5c5bbc8af91c16af89b4ddd57ee6834be018e4e7" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "30 words", "evidence": [ "We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 million event pairs for AL, 41 million for CA, and 6 million for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that each was five times larger than AL. The results are shown in Table TABREF16." ], "highlighted_evidence": [ "We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. " ] } ], "annotation_id": [ "0206f2131f64a3e02498cedad1250971b78ffd0c" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "100 million sentences" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. 
Here is an example of event pair extraction.", "We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 million event pairs for AL, 41 million for CA, and 6 million for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that each was five times larger than AL. The results are shown in Table TABREF16." ], "highlighted_evidence": [ "As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. ", "From the corpus of about 100 million sentences, we obtained 1.4 million event pairs for AL, 41 million for CA, and 6 million for CO." ] } ], "annotation_id": [ "c36bad2758c4f9866d64c357c475d370595d937f" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
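The method summarized in the evidence above (score an event +1/-1 when its predicate is in the seed lexicon, share polarity across Cause pairs, reverse it across Concession pairs) can be sketched as a simple propagation loop. Everything below, including the event strings, the seed entries, and the `propagate` helper, is an invented illustration; the paper itself trains neural network models with loss functions over the AL/CA/CO pairs rather than propagating labels by hand.

```python
# Illustrative sketch only (not the paper's neural model): propagate
# polarity scores from a tiny seed lexicon over Cause/Concession pairs.
# All events, lexicon entries and pairs below are invented examples.

SEED_LEXICON = {"be glad": +1, "be sad": -1}  # hypothetical seed predicates

def seed_polarity(event):
    """Return +1/-1 if the event's predicate is a seed word, else None."""
    for predicate, score in SEED_LEXICON.items():
        if event.endswith(predicate):
            return score
    return None

def propagate(pairs, iterations=3):
    """pairs: list of (x1, x2, relation), relation in {'Cause', 'Concession'}.
    Cause pairs copy a known neighbor's polarity; Concession pairs flip it."""
    scores = {}
    for x1, x2, _ in pairs:                 # seed the known events first
        for event in (x1, x2):
            score = seed_polarity(event)
            if score is not None:
                scores[event] = score
    for _ in range(iterations):             # then spread labels along pairs
        for x1, x2, relation in pairs:
            sign = 1 if relation == "Cause" else -1
            if x2 in scores and x1 not in scores:
                scores[x1] = sign * scores[x2]
            elif x1 in scores and x2 not in scores:
                scores[x2] = sign * scores[x1]
    return scores

pairs = [
    ("win the lottery", "be glad", "Cause"),
    ("lose one's wallet", "be sad", "Cause"),
    ("work hard", "be sad", "Concession"),
]
print(propagate(pairs))  # every event ends up with a +1/-1 score
```

Cause pairs transfer the neighbor's score unchanged while Concession pairs flip its sign, mirroring the same-polarity/reverse-polarity tendency the authors exploit.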
2003.07723
PO-EMO: Conceptualization, Annotation, and Modeling of Aesthetic Emotions in German and English Poetry
Most approaches to emotion analysis regarding social media, literature, news, and other domains focus exclusively on basic emotion categories as defined by Ekman or Plutchik. However, art (such as literature) enables engagement in a broader range of more complex and subtle emotions that have been shown to also include mixed emotional responses. We consider emotions as they are elicited in the reader, rather than what is expressed in the text or intended by the author. Thus, we conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader, and allow the annotation of multiple labels per line to capture mixed emotions within context. We evaluate this novel setting in an annotation experiment both with carefully trained experts and via crowdsourcing. Our annotation with experts leads to an acceptable agreement of kappa=.70, resulting in a consistent dataset for future large scale analysis. Finally, we conduct first emotion classification experiments based on BERT, showing that identifying aesthetic emotions is challenging in our data, with up to .52 F1-micro on the German subset. Data and resources are available at https://github.com/tnhaider/poetry-emotion
{ "section_name": [ "", " ::: ", " ::: ::: ", "Introduction", "Related Work ::: Poetry in Natural Language Processing", "Related Work ::: Emotion Annotation", "Related Work ::: Emotion Classification", "Data Collection", "Data Collection ::: German", "Data Collection ::: English", "Expert Annotation", "Expert Annotation ::: Workflow", "Expert Annotation ::: Emotion Labels", "Expert Annotation ::: Agreement", "Crowdsourcing Annotation", "Crowdsourcing Annotation ::: Data and Setup", "Crowdsourcing Annotation ::: Results", "Crowdsourcing Annotation ::: Comparing Experts with Crowds", "Modeling", "Concluding Remarks", "Acknowledgements", "Appendix", "Appendix ::: Friedrich Hölderlin: Hälfte des Lebens (1804)", "Appendix ::: Georg Trakl: In den Nachmittag geflüstert (1912)", "Appendix ::: Walt Whitman: O Captain! My Captain! (1865)" ], "paragraphs": [ [ "1.1em" ], [ "1.1.1em" ], [ "1.1.1.1em", "Thomas Haider$^{1,3}$, Steffen Eger$^2$, Evgeny Kim$^3$, Roman Klinger$^3$, Winfried Menninghaus$^1$", "$^{1}$Department of Language and Literature, Max Planck Institute for Empirical Aesthetics", "$^{2}$NLLG, Department of Computer Science, Technische Universitat Darmstadt", "$^{3}$Institut für Maschinelle Sprachverarbeitung, University of Stuttgart", "{thomas.haider, w.m}@ae.mpg.de, eger@aiphes.tu-darmstadt.de", "{roman.klinger, evgeny.kim}@ims.uni-stuttgart.de", "Most approaches to emotion analysis regarding social media, literature, news, and other domains focus exclusively on basic emotion categories as defined by Ekman or Plutchik. However, art (such as literature) enables engagement in a broader range of more complex and subtle emotions that have been shown to also include mixed emotional responses. We consider emotions as they are elicited in the reader, rather than what is expressed in the text or intended by the author. 
Thus, we conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader, and allow the annotation of multiple labels per line to capture mixed emotions within context. We evaluate this novel setting in an annotation experiment both with carefully trained experts and via crowdsourcing. Our annotation with experts leads to an acceptable agreement of $\\kappa =.70$, resulting in a consistent dataset for future large scale analysis. Finally, we conduct first emotion classification experiments based on BERT, showing that identifying aesthetic emotions is challenging in our data, with up to .52 F1-micro on the German subset. Data and resources are available at https://github.com/tnhaider/poetry-emotion.", "Emotion, Aesthetic Emotions, Literature, Poetry, Annotation, Corpora, Emotion Recognition, Multi-Label" ], [ "Emotions are central to human experience, creativity and behavior. Models of affect and emotion, both in psychology and natural language processing, commonly operate on predefined categories, designated either by continuous scales of, e.g., Valence, Arousal and Dominance BIBREF0 or discrete emotion labels (which can also vary in intensity). Discrete sets of emotions often have been motivated by theories of basic emotions, as proposed by Ekman1992—Anger, Fear, Joy, Disgust, Surprise, Sadness—and Plutchik1991, who added Trust and Anticipation. These categories are likely to have evolved as they motivate behavior that is directly relevant for survival. However, art reception typically presupposes a situation of safety and therefore offers special opportunities to engage in a broader range of more complex and subtle emotions. These differences between real-life and art contexts have not been considered in natural language processing work so far.", "To emotionally move readers is considered a prime goal of literature since Latin antiquity BIBREF1, BIBREF2, BIBREF3. 
Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4. In cases like these, the emotional response actually implies an aesthetic evaluation: narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason. Similarly, feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions” BIBREF2. Contrary to the negativity bias of classical emotion catalogues, emotion terms used for aesthetic evaluation purposes include far more positive than negative emotions. At the same time, many overall positive aesthetic emotions encompass negative or mixed emotional ingredients BIBREF2, e.g., feelings of suspense include both hopeful and fearful anticipations.", "For these reasons, we argue that the analysis of literature (with a focus on poetry) should rely on specifically selected emotion items rather than on the narrow range of basic emotions only. Our selection is based on previous research on this issue in psychological studies on art reception and, specifically, on poetry. For instance, knoop2016mapping found that Beauty is a major factor in poetry reception.", "We primarily adopt and adapt emotion terms that schindler2017measuring have identified as aesthetic emotions in their study on how to measure and categorize such particular affective states. Further, we consider the aspect that, when selecting specific emotion labels, the perspective of annotators plays a major role. Whether emotions are elicited in the reader, expressed in the text, or intended by the author largely changes the permissible labels. 
For example, feelings of Disgust or Love might be intended or expressed in the text, but the text might still fail to elicit corresponding feelings as these concepts presume a strong reaction in the reader. Our focus here was on the actual emotional experience of the readers rather than on hypothetical intentions of authors. We opted for this reader perspective based on previous research in NLP BIBREF5, BIBREF6 and work in empirical aesthetics BIBREF7, that specifically measured the reception of poetry. Our final set of emotion labels consists of Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia.", "In addition to selecting an adapted set of emotions, the annotation of poetry brings further challenges, one of which is the choice of the appropriate unit of annotation. Previous work considers words BIBREF8, BIBREF9, sentences BIBREF10, BIBREF11, utterances BIBREF12, sentence triples BIBREF13, or paragraphs BIBREF14 as the units of annotation. For poetry, reasonable units follow the logical document structure of poems, i.e., verse (line), stanza, and, owing to its relative shortness, the complete text. The more coarse-grained the unit, the more difficult the annotation is likely to be, but the more it may also enable the annotation of emotions in context. We find that annotating fine-grained units (lines) that are hierarchically ordered within a larger context (stanza, poem) caters to the specific structure of poems, where emotions are regularly mixed and are more interpretable within the whole poem. Consequently, we allow the mixing of emotions already at line level through multi-label annotation.", "The remainder of this paper includes (1) a report of the annotation process that takes these challenges into consideration, (2) a description of our annotated corpora, and (3) an implementation of baseline models for the novel task of aesthetic emotion annotation in poetry. 
In a first study, the annotators work on the annotations in a closely supervised fashion, carefully reading each verse, stanza, and poem. In a second study, the annotations are performed via crowdsourcing within relatively short time periods with annotators not seeing the entire poem while reading the stanza. Using these two settings, we aim at obtaining a better understanding of the advantages and disadvantages of an expert vs. crowdsourcing setting in this novel annotation task. Particularly, we are interested in estimating the potential of a crowdsourcing environment for the task of self-perceived emotion annotation in poetry, given the time and cost overhead associated with an in-house annotation process (which usually involves training and close supervision of the annotators).", "We provide the final datasets of German and English language poems annotated with reader emotions on verse level at https://github.com/tnhaider/poetry-emotion." ], [ "Natural language understanding research on poetry has investigated stylistic variation BIBREF15, BIBREF16, BIBREF17, with a focus on broadly accepted formal features such as meter BIBREF18, BIBREF19, BIBREF20 and rhyme BIBREF21, BIBREF22, as well as enjambement BIBREF23, BIBREF24 and metaphor BIBREF25, BIBREF26. Recent work has also explored the relationship of poetry and prose, mainly on a syntactic level BIBREF27, BIBREF28. Furthermore, poetry also lends itself well to semantic (change) analysis BIBREF29, BIBREF30, as linguistic invention BIBREF31, BIBREF32 and succinctness BIBREF33 are at the core of poetic production.", "Corpus-based analysis of emotions in poetry has been considered, but there is no work on German, and little on English. kao2015computational analyze English poems with word associations from the Harvard Inquirer and LIWC, within the categories positive/negative outlook, positive/negative emotion and phys./psych. well-being. 
hou-frank-2015-analyzing examine the binary sentiment polarity of Chinese poems with a weighted personalized PageRank algorithm. barros2013automatic followed a tagging approach with a thesaurus to annotate words that are similar to the words 'Joy', 'Anger', 'Fear' and 'Sadness' (moreover translating these from English to Spanish). With these word lists, they distinguish the categories 'Love', 'Songs to Lisi', 'Satire' and 'Philosophical-Moral-Religious' in Quevedo's poetry. Similarly, alsharif2013emotion classify unique Arabic 'emotional text forms' based on word unigrams.", "Mohanty2018 create a corpus of 788 poems in the Indian Odia language, annotate it on text (poem) level with binary negative and positive sentiment, and are able to distinguish these with moderate success. Sreeja2019 construct a corpus of 736 Indian language poems and annotate the texts on Ekman's six categories + Love + Courage. They achieve a Fleiss Kappa of .48.", "In contrast to our work, these studies focus on basic emotions and binary sentiment polarity only, rather than addressing aesthetic emotions. Moreover, they annotate on the level of complete poems (instead of fine-grained verse and stanza-level)." ], [ "Emotion corpora have been created for different tasks and with different annotation strategies, with different units of analysis and different foci of emotion perspective (reader, writer, text). Examples include the ISEAR dataset BIBREF34 (document-level); emotion annotation in children stories BIBREF10 and news headlines BIBREF35 (sentence-level); and fine-grained emotion annotation in literature by Kim2018 (phrase- and word-level). We refer the interested reader to an overview paper on existing corpora BIBREF36.", "We are only aware of a limited number of publications which look in more depth into the emotion perspective. buechel-hahn-2017-emobank report on an annotation study that focuses both on writer's and reader's emotions associated with English sentences. 
The results show that the reader perspective yields better inter-annotator agreement. Yang2009 also study the difference between writer and reader emotions, but not with a modeling perspective. The authors find that positive reader emotions tend to be linked to positive writer emotions in online blogs." ], [ "The task of emotion classification has been tackled before using rule-based and machine learning approaches. Rule-based emotion classification typically relies on lexical resources of emotionally charged words BIBREF9, BIBREF37, BIBREF8 and offers a straightforward and transparent way to detect emotions in text.", "In contrast to rule-based approaches, current models for emotion classification are often based on neural networks and commonly use word embeddings as features. Schuff2017 applied models from the classes of CNN, BiLSTM, and LSTM and compare them to linear classifiers (SVM and MaxEnt), where the BiLSTM shows best results with the most balanced precision and recall. AbdulMageed2017 claim the highest F$_1$ with gated recurrent unit networks BIBREF38 for Plutchik's emotion model. More recently, shared tasks on emotion analysis BIBREF39, BIBREF40 triggered a set of more advanced deep learning approaches, including BERT BIBREF41 and other transfer learning methods BIBREF42." ], [ "For our annotation and modeling studies, we build on top of two poetry corpora (in English and German), which we refer to as PO-EMO. This collection represents important contributions to the literary canon over the last 400 years. We make this resource available in TEI P5 XML and an easy-to-use tab separated format. Table TABREF9 shows a size overview of these data sets. Figure FIGREF8 shows the distribution of our data over time via density plots. Note that both corpora show a relative underrepresentation before the onset of the romantic period (around 1750)." 
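The rule-based approach described above, which matches tokens against lexical resources of emotionally charged words, can be illustrated with a minimal matcher; the word lists are toy placeholders invented for this sketch, not any of the cited lexicons.

```python
# Minimal sketch of rule-based emotion classification via a lexicon of
# emotionally charged words. The word lists are toy placeholders, not a
# real resource such as the lexicons cited in the paper.
import re
from collections import Counter

EMOTION_LEXICON = {
    "joy": {"happy", "bright", "laugh"},
    "sadness": {"tears", "grave", "mourn"},
    "fear": {"dread", "dark", "tremble"},
}

def classify(text):
    """Count lexicon hits per emotion category and return the counts."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for emotion, words in EMOTION_LEXICON.items():
            if token in words:
                counts[emotion] += 1
    return counts

line = "Tears fall in the dark, yet we laugh"  # toy verse line
print(classify(line))  # one hit each for sadness, fear, and joy
```

The transparency noted in the text comes for free here: every predicted emotion can be traced back to the exact lexicon words that triggered it.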
], [ "The German corpus contains poems available from the website lyrik.antikoerperchen.de (ANTI-K), which provides a platform for students to upload essays about poems. The data is available in the Hypertext Markup Language, with clean line and stanza segmentation. ANTI-K also has extensive metadata, including author names, years of publication, numbers of sentences, poetic genres, and literary periods, that enable us to gauge the distribution of poems according to periods. The 158 poems we consider (731 stanzas) are dispersed over 51 authors and the New High German timeline (1575–1936 A.D.). This data has been annotated, besides emotions, for meter, rhythm, and rhyme in other studies BIBREF22, BIBREF43." ], [ "The English corpus contains 64 poems of popular English writers. It was partly collected from Project Gutenberg with the GutenTag tool, and, in addition, includes a number of hand selected poems from the modern period and represents a cross section of popular English poets. We took care to include a number of female authors, who would have been underrepresented in a uniform sample. Time stamps in the corpus are organized by the birth year of the author, as assigned in Project Gutenberg." ], [ "In the following, we will explain how we compiled and annotated three data subsets, namely, (1) 48 German poems with gold annotation. These were originally annotated by three annotators. The labels were then aggregated with majority voting and based on discussions among the annotators. Finally, they were curated to only include one gold annotation. (2) The remaining 110 German poems that are used to compute the agreement in table TABREF20 and (3) 64 English poems contain the raw annotation from two annotators.", "We report the genesis of our annotation guidelines including the emotion classes. 
With the intention of providing a language resource for the computational analysis of emotion in poetry, we aimed at maximizing the consistency of our annotation, while doing justice to the diversity of poetry. We iteratively improved the guidelines and the annotation workflow by annotating in batches, cleaning the class set, and compiling a gold standard. The final overall cost of producing this expert annotated dataset amounts to approximately 3,500." ], [ "The annotation process was initially conducted by three female university students majoring in linguistics and/or literary studies, whom we refer to as our “expert annotators”. We used the INCePTION platform for annotation BIBREF44. Starting with the German poems, we annotated in batches of about 16 (and later in some cases 32) poems. After each batch, we computed agreement statistics including heatmaps, and provided this feedback to the annotators. For the first three batches, the three annotators produced a gold standard using a majority vote for each line. Where this was inconclusive, they developed an adjudicated annotation based on discussion. Where necessary, we encouraged the annotators to aim for more consistency, as most of the frequent switching of emotions within a stanza could not be reconstructed or justified.", "In poems, emotions are regularly mixed (already on line level) and are more interpretable within the whole poem. We therefore annotate lines hierarchically within the larger context of stanzas and the whole poem. Hence, we instruct the annotators to read a complete stanza or full poem, and then annotate each line in the context of its stanza. To reflect on the emotional complexity of poetry, we allow a maximum of two labels per line while avoiding heavy label fluctuations by encouraging annotators to reflect on their feelings to avoid 'empty' annotations. Rather, they were advised to use fewer labels and more consistent annotation. 
This additional constraint is necessary to avoid “wild”, non-reconstructable or non-justified annotations.", "All subsequent batches (all except the first three) were only annotated by two out of the three initial annotators, coincidentally those two who had the lowest initial agreement with each other. We asked these two experts to use the generated gold standard (48 poems; majority votes of 3 annotators plus manual curation) as a reference (“if in doubt, annotate according to the gold standard”). This eliminated some systematic differences between them and markedly improved the agreement levels, roughly from 0.3–0.5 Cohen's $\\kappa$ in the first three batches to around 0.6–0.8 $\\kappa$ for all subsequent batches. This annotation procedure relaxes the reader perspective, as we encourage annotators (if in doubt) to annotate how they think the other annotators would annotate. However, we found that this formulation improves the usability of the data and leads to a more consistent annotation." ], [ "We opt for measuring the reader perspective rather than the text surface or author's intent. To more closely define and support conceptualizing our labels, we use particular 'items', as they are used in psychological self-evaluations. These items consist of adjectives, verbs or short phrases. We build on top of schindler2017measuring who proposed 43 items that were then grouped by a factor analysis based on self-evaluations of participants. The resulting factors are shown in Table TABREF17. We attempt to cover all identified factors and supplement with basic emotions BIBREF46, BIBREF47, where possible.", "We started with a larger set of labels to then delete and substitute (tone down) labels during the initial annotation process to avoid infrequent classes and inconsistencies. Further, we conflate labels if they show considerable confusion with each other. 
These iterative improvements particularly affected Confusion, Boredom and Other, which were very infrequently annotated and had little agreement among annotators ($\\kappa <.2$). For German, we also removed Nostalgia ($\\kappa =.218$) after gold standard creation, but after consideration, added it back for English, then achieving agreement. Nostalgia is still available in the gold standard (then with a second label Beauty/Joy or Sadness to keep consistency). However, Confusion, Boredom and Other are not available in any sub-corpus.", "Our final set consists of nine classes, i.e., (in order of frequency) Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia. In the following, we describe the labels and give further details on the aggregation process.", "Annoyance (annoys me/angers me/felt frustrated): Annoyance implies feeling annoyed, frustrated or even angry while reading the line/stanza. We include the class Anger here, as this was found to be too strong in intensity.", "Awe/Sublime (found it overwhelming/sense of greatness): Awe/Sublime implies being overwhelmed by the line/stanza, i.e., if one gets the impression of facing something sublime or if the line/stanza inspires one with awe (or that the expression itself is sublime). Such emotions are often associated with subjects like god, death, life, truth, etc. The term Sublime originated with kant2000critique as one of the first aesthetic emotion terms. Awe is a more common English term.", "Beauty/Joy (found it beautiful/pleasing/makes me happy/joyful): kant2000critique already spoke of a “feeling of beauty”, and it should be noted that it is not a merely 'pleasing' emotion. Therefore, in our pilot annotations, Beauty and Joy were separate labels. However, schindler2017measuring found that items for Beauty and Joy load into the same factors. 
Furthermore, our pilot annotations revealed that, while Beauty is the more dominant and frequent feeling, both labels regularly accompany each other, and they often get confused across annotators. Therefore, we add Joy to form an inclusive label Beauty/Joy that increases annotation consistency.", "Humor (found it funny/amusing): Implies feeling amused by the line/stanza or if it makes one laugh.", "Nostalgia (makes me nostalgic): Nostalgia is defined as a sentimental longing for things, persons or situations in the past. It often carries both positive and negative feelings. However, since this label is quite infrequent, and not available in all subsets of the data, we annotated it with an additional Beauty/Joy or Sadness label to ensure annotation consistency.", "Sadness (makes me sad/touches me): If the line/stanza makes one feel sad. It also includes a more general 'being touched/moved'.", "Suspense (found it gripping/sparked my interest): Choose Suspense if the line/stanza keeps one in suspense (if the line/stanza excites one or triggers one's curiosity). We further removed Anticipation from Suspense/Anticipation, as Anticipation appeared to us as being a more cognitive prediction whereas Suspense is a far more straightforward emotion item.", "Uneasiness (found it ugly/unsettling/disturbing / frightening/distasteful): This label covers situations when one feels discomfort about the line/stanza (if the line/stanza feels distasteful/ugly, unsettling/disturbing or frightens one). The labels Ugliness and Disgust were conflated into Uneasiness, as both are seldom felt in poetry (being inadequate/too strong/high in arousal), and typically lead to Uneasiness.", "Vitality (found it invigorating/spurs me on/inspires me): This label is meant for a line/stanza that has an inciting, encouraging effect (if the line/stanza conveys a feeling of movement, energy and vitality which animates to action). Similar terms are Activation and Stimulation." 
], [ "Table TABREF20 shows Cohen's $\\kappa$ agreement scores between our two expert annotators for each emotion category $e$, computed as follows. We assign each instance (a line in a poem) a binary label indicating whether or not the annotator has annotated the emotion category $e$ in question. From this, we obtain vectors $v_i^e$, for annotators $i=0,1$, where each entry of $v_i^e$ holds the binary value for the corresponding line. We then apply the $\\kappa$ statistic to the two binary vectors $v_i^e$. In addition to averaged $\\kappa$, we report in Table TABREF21 micro-F1 values between the multi-label annotations of both expert annotators, as well as the micro-F1 scores of a random baseline and of the majority-emotion baseline (which labels each line as Beauty/Joy).", "We find that Cohen's $\\kappa$ agreement ranges from .84 for Uneasiness in the English data, .81 for Humor and Nostalgia, down to German Suspense (.65), Awe/Sublime (.61) and Vitality for both languages (.50 English, .63 German). Both annotators have a similar emotion frequency profile, where the ranking is almost identical, especially for German. However, for English, Annotator 2 annotates more Vitality than Uneasiness. Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps. Notably, Beauty/Joy and Sadness are confused across annotators more often than other labels. This is topical for poetry, and therefore not surprising: one might argue that the beauty of beings and situations is only beautiful because it is not enduring, and therefore cannot be divorced from the sadness of its vanishing BIBREF48. We also find considerable confusion of Sadness with Awe/Sublime and Vitality, while the latter is also regularly confused with Beauty/Joy.", "Furthermore, as shown in Figure FIGREF23, we find that no single poem aggregates to more than six emotion labels, while no stanza aggregates to more than four emotion labels.
However, most lines and stanzas receive one or two labels. German poems appear more emotionally diverse: more German poems carry three labels than two, while the majority of English poems have only two labels. This is, however, attributable to the generally shorter English texts." ], [ "After concluding the expert annotation, we performed a focused crowdsourcing experiment, based on the final label set and items as they are listed in Table TABREF27 and Section SECREF19. With this experiment, we aim to understand whether it is possible to collect reliable judgements of the aesthetic perception of poetry from a crowdsourcing platform. A second goal is to see whether we can replicate the expensive expert annotations with less costly crowd annotations.", "We opted for a maximally simple annotation environment, where we asked participants to annotate English 4-line stanzas with self-perceived reader emotions. We chose English due to the higher availability of English-language annotators on crowdsourcing platforms. Each annotator rates each stanza independently of surrounding context." ], [ "For consistency and to simplify the task for the annotators, we opt for a trade-off between completeness and granularity of the annotation. Specifically, we subselect stanzas composed of four verses from the corpus of 64 hand-selected English poems. The resulting selection of 59 stanzas is uploaded to Figure Eight for annotation.", "The annotators are asked to answer the following questions for each instance.", "Question 1 (single-choice): Read the following stanza and decide for yourself which emotions it evokes.", "Question 2 (multiple-choice): Which additional emotions does the stanza evoke?", "The answers to both questions correspond to the emotion labels we defined for our annotation, as described in Section SECREF19.
We add an additional answer choice “None” to Question 2 to allow annotators to say that a stanza does not evoke any additional emotions.", "Each instance is annotated by ten people. We restrict the task geographically to the United Kingdom and Ireland and set the internal parameters on Figure Eight to allow only the highest-quality annotators to join the task. We pay 0.09 per instance. The final cost of the crowdsourcing experiment is 74." ], [ "In the following, we use bootstrap resampling to determine the best strategy for aggregating the labels of the 10 annotators. For instance, one could assign the label of a specific emotion to an instance if just one annotator picks it, or one could assign the label only if all annotators agree on this emotion. To evaluate this, we repeatedly pick two sets of 5 annotators each out of the 10 annotators for each of the 59 stanzas, 1000 times overall (i.e., 1000$\\times$59 times, bootstrap resampling). For each of these repetitions, we compare the agreement of these two groups of 5 annotators. Each group is assigned an adjudicated emotion, which is accepted if at least one annotator picks it, at least two annotators pick it, and so on, up to all five.", "We show the results in Table TABREF27. The $\\kappa$ scores show the average agreement between the two groups of five annotators when the adjudicated class is picked based on the particular threshold of annotators with the same label choice. We see that some emotions tend to have higher agreement scores than others, namely Annoyance (.66), Sadness (up to .52), and Awe/Sublime, Beauty/Joy, Humor (all .46). The maximum agreement is reached mostly with a threshold of 2 (4 times) or 3 (3 times).", "We further show in the same table the average numbers of labels from each strategy. Obviously, a lower threshold leads to higher numbers (corresponding to a disjunction of annotations for each emotion).
The drop in label counts is comparatively drastic, with on average 18 labels per class. Overall, the best average $\\kappa$ agreement (.32) is less than half of what we saw for the expert annotators (roughly .70). Crowds especially disagree on the more intricate emotion labels (Uneasiness, Vitality, Nostalgia, Suspense).", "We visualize how often two emotions are used to label an instance in a confusion table in Figure FIGREF18. Sadness is used most often to annotate a stanza, and it is often confused with Suspense, Uneasiness, and Nostalgia. Further, Beauty/Joy partially overlaps with Awe/Sublime, Nostalgia, and Sadness.", "On average, each crowd annotator uses two emotion labels per stanza (56% of cases); the annotators use only one label in 36% of the cases, and three and four labels in 6% and 1% of the cases, respectively. This contrasts with the expert annotators, who use one label in about 70% of the cases and two labels in 30% of the cases for the same 59 four-liners. Concerning the frequency distribution of emotion labels, both experts and crowds name Sadness and Beauty/Joy as the most frequent emotions (for the 'best' threshold of 3) and Nostalgia as one of the least frequent emotions. The Spearman rank correlation between experts and crowds is about 0.55 with respect to the label frequency distribution, indicating that crowds could replace experts to a moderate degree when it comes to extracting, e.g., emotion distributions for an author or time period. We now further compare crowds and experts in terms of whether crowds could replicate expert annotations also on a finer stanza level (rather than only on a distributional level)." ], [ "To gauge the quality of the crowd annotations in comparison with our experts, we calculate agreement on the emotions between experts and an increasing group size from the crowd.
For each stanza instance $s$, we pick $N$ crowd workers, where $N\\in \\lbrace 4,6,8,10\\rbrace$, then pick their majority emotion for $s$, and additionally pick their second-ranked majority emotion if at least $\\frac{N}{2}-1$ workers have chosen it. For the experts, we aggregate their emotion labels on stanza level, then apply the same strategy for the selection of emotion labels. Thus, for $s$, both crowds and experts have 1 or 2 emotions. For each emotion, we then compute Cohen's $\\kappa$ as before. Note that, compared to our previous experiments in Section SECREF26 with a threshold, each stanza now receives an emotion annotation (exactly one or two emotion labels), both by the experts and the crowd-workers.", "In Figure FIGREF30, we plot agreement between experts and crowds on stanza level as we vary the number $N$ of crowd workers involved. On average, there is roughly a steady linear increase in agreement as $N$ grows, which may indicate that $N=20$ or $N=30$ would still lead to better agreement. Concerning individual emotions, Nostalgia is the emotion with the least agreement, as opposed to Sadness (in our sample of 59 four-liners): the agreement for Sadness grows from $.47$ $\\kappa$ with $N=4$ to $.65$ $\\kappa$ with $N=10$. Sadness is also the most frequent emotion, both according to experts and crowds. Other emotions for which a reasonable agreement is achieved are Annoyance, Awe/Sublime, Beauty/Joy, Humor ($\\kappa$ > 0.2). Emotions with little agreement are Vitality, Uneasiness, Suspense, Nostalgia ($\\kappa$ < 0.2).", "By and large, we note from Figure FIGREF18 that expert annotation is more restrictive, with experts agreeing more often on particular emotion labels (seen in the darker diagonal). The results of the crowdsourcing experiment, on the other hand, are a mixed bag, as evidenced by a much sparser distribution of emotion labels.
However, we note that these differences can be caused by 1) the disparate training procedure for the experts and crowds, and 2) the lack of opportunities for close supervision and ongoing training of the crowds, as opposed to the in-house expert annotators.", "In general, however, we find that substituting experts with crowds is possible to a certain degree. Even though the crowds' labels look inconsistent at first glance, there appears to be a good signal in their aggregated annotations, helping to approximate expert annotations. The average $\\kappa$ agreement (with the experts) we get from $N=10$ crowd workers (0.24) is still considerably below the agreement among the experts (0.70)." ], [ "To estimate the difficulty of automatic classification of our data set, we perform multi-label document classification (of stanzas) with BERT BIBREF41. For this experiment we aggregate all labels for a stanza and sort them by frequency, both for the gold standard and the raw expert annotations. As can be seen in Figure FIGREF23, a stanza bears a minimum of one and a maximum of four emotions. Unfortunately, the label Nostalgia is only available 16 times in the German data (the gold standard) as a second label (as discussed in Section SECREF19). None of our models was able to learn this label for German. Therefore we omit it, leaving us with eight proper labels.", "We use the code and the pre-trained BERT models of Farm, provided by deepset.ai.
We test the multilingual-uncased model (Multiling), the german-base-cased model (Base), the german-dbmdz-uncased model (Dbmdz), and we tune the Base model on 80k stanzas of the German Poetry Corpus DLK BIBREF30 for 2 epochs, both on token (masked words) and sequence (next line) prediction (Base$_{\\textsc {Tuned}}$).", "We split the randomized German dataset so that each label appears at least 10 times in the validation set (63 instances, 113 labels) and at least 10 times in the test set (56 instances, 108 labels), and leave the rest for training (617 instances, 946 labels). We train BERT for 10 epochs (with a batch size of 8), optimize with cross-entropy loss, and report F1-micro on the test set. See Table TABREF36 for the results.", "We find that the multilingual model cannot handle infrequent categories, i.e., Awe/Sublime, Suspense and Humor. However, augmenting the dataset with English data improves the results, suggesting that the classification would largely benefit from more annotated data. The best model overall is DBMDZ (.520), showing a balanced response on both validation and test set. See Table TABREF37 for a breakdown of all emotions as predicted by this model. Precision is mostly higher than recall. The labels Awe/Sublime, Suspense and Humor are harder to predict than the other labels.", "The BASE and BASE$_{\\textsc {TUNED}}$ models perform slightly worse than DBMDZ. The effect of tuning the BASE model is questionable, probably because of the restricted vocabulary (30k). We found that tuning on poetry does not show obvious improvements. Lastly, we find that models that were trained on lines (instead of stanzas) do not achieve the same F1 (~.42 for the German models)." ], [ "In this paper, we presented a dataset of German and English poetry annotated with reader responses to poetry.
We argued that basic emotions, as proposed by psychologists (such as Ekman and Plutchik) and often used in emotion analysis from text, are of little use for the annotation of poetry reception. We instead conceptualized aesthetic emotion labels and showed that a closely supervised annotation task results in substantial agreement—in terms of $\\kappa$ score—on the final dataset.", "The task of collecting reader-perceived emotion responses to poetry in a crowdsourcing setting is not straightforward. In contrast to expert annotators, who were closely supervised and reflected upon the task, the annotators on crowdsourcing platforms are difficult to control and may lack the necessary background knowledge to perform the task at hand. However, using a larger number of crowd annotators may lead to finding an aggregation strategy with a better trade-off between quality and quantity of adjudicated labels. For future work, we thus propose to repeat the experiment with a larger number of crowdworkers and to develop an improved training strategy that would suit the crowdsourcing environment.", "The dataset presented in this paper can be of use for different application scenarios, including multi-label emotion classification, style-conditioned poetry generation, investigating the influence of rhythm/prosodic features on emotion, or analysis of authors, genres and diachronic variation (e.g., how emotions are represented differently in certain periods).", "Further, though our modeling experiments are still rudimentary, we propose that this data set can be used to investigate intra-poem relations, either through multi-task learning BIBREF49 or with the help of hierarchical sequence classification approaches." ], [ "A special thanks goes to Gesine Fuhrmann, who created the guidelines and tirelessly documented the annotation progress. Also thanks to Annika Palm and Debby Trzeciak, who annotated and gave lively feedback.
For help with the conceptualization of labels we thank Ines Schindler. This research has been partially conducted within the CRETA center (http://www.creta.uni-stuttgart.de/), which is funded by the German Ministry for Education and Research (BMBF), and partially funded by the German Research Council (DFG), project SEAT (Structured Multi-Domain Emotion Analysis from Text, KL 2869/1-1). This work has also been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) at the Technische Universität Darmstadt under grant No. GRK 1994/1." ], [ "We illustrate our gold standard annotation with two German poems, one each by Friedrich Hölderlin and Georg Trakl, and an English poem by Walt Whitman. Hölderlin's text stands out because the mood changes starkly from the first stanza to the second, from Beauty/Joy to Sadness. Trakl's text is a bit more complex, with bits of Nostalgia and, most importantly, a mixture of Uneasiness with Awe/Sublime. Whitman's poem is an example of Vitality and its mixing with Sadness. We unified the English annotation for space reasons. For the full annotation please see https://github.com/tnhaider/poetry-emotion/" ], [ "" ], [ "" ], [ "" ] ] }
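The inter-annotator agreement procedure described in the annotation analysis above (binarizing each annotator's labels per line into vectors $v_i^e$ and applying Cohen's $\kappa$) can be sketched as follows; the function name and from-scratch formulation are ours:

```python
def cohens_kappa(v0, v1):
    """Cohen's kappa for two equal-length binary label vectors,
    one per annotator, for a single emotion category."""
    assert len(v0) == len(v1)
    n = len(v0)
    # Observed agreement: fraction of lines where both annotators match.
    po = sum(a == b for a, b in zip(v0, v1)) / n
    # Expected chance agreement from each annotator's positive rate.
    p0, p1 = sum(v0) / n, sum(v1) / n
    pe = p0 * p1 + (1 - p0) * (1 - p1)
    if pe == 1.0:
        return 1.0  # degenerate case: both annotators constant and identical
    return (po - pe) / (1 - pe)
```

For the per-emotion scores reported in the tables, this would be applied once per emotion category and then averaged.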
{ "question": [ "Does the paper report macro F1?", "How is the annotation experiment evaluated?", "What are the aesthetic emotions formalized?" ], "question_id": [ "3a9d391d25cde8af3334ac62d478b36b30079d74", "8d8300d88283c73424c8f301ad9fdd733845eb47", "48b12eb53e2d507343f19b8a667696a39b719807" ], "nlp_background": [ "two", "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "German", "German", "German" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. ‘Support’ signifies the number of labels." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. ‘Support’ signifies the number of labels." ] }, { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We find that the multilingual model cannot handle infrequent categories, i.e., Awe/Sublime, Suspense and Humor. However, increasing the dataset with English data improves the results, suggesting that the classification would largely benefit from more annotated data. The best model overall is DBMDZ (.520), showing a balanced response on both validation and test set. See Table TABREF37 for a breakdown of all emotions as predicted by the this model. Precision is mostly higher than recall. The labels Awe/Sublime, Suspense and Humor are harder to predict than the other labels.", "FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. ‘Support’ signifies the number of labels." 
], "highlighted_evidence": [ "See Table TABREF37 for a breakdown of all emotions as predicted by the this model.", "FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. ‘Support’ signifies the number of labels." ] } ], "annotation_id": [ "0220672a84e5d828ec90f8ee65ab39414cd170f7", "bac3f916c426a5809d910072100fdf12ad3fc30d" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "confusion matrices of labels between annotators" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We find that Cohen $\\kappa$ agreement ranges from .84 for Uneasiness in the English data, .81 for Humor and Nostalgia, down to German Suspense (.65), Awe/Sublime (.61) and Vitality for both languages (.50 English, .63 German). Both annotators have a similar emotion frequency profile, where the ranking is almost identical, especially for German. However, for English, Annotator 2 annotates more Vitality than Uneasiness. Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps. Notably, Beauty/Joy and Sadness are confused across annotators more often than other labels. This is topical for poetry, and therefore not surprising: One might argue that the beauty of beings and situations is only beautiful because it is not enduring and therefore not to divorce from the sadness of the vanishing of beauty BIBREF48. We also find considerable confusion of Sadness with Awe/Sublime and Vitality, while the latter is also regularly confused with Beauty/Joy." ], "highlighted_evidence": [ "Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps." 
] } ], "annotation_id": [ "218914b3ebf4fe7a1026f109cf02b0c3e37905b6" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking)", "Emotions that exhibit this dual capacity have been defined as “aesthetic emotions”" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To emotionally move readers is considered a prime goal of literature since Latin antiquity BIBREF1, BIBREF2, BIBREF3. Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4. In cases like these, the emotional response actually implies an aesthetic evaluation: narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason. Similarly, feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions” BIBREF2. Contrary to the negativity bias of classical emotion catalogues, emotion terms used for aesthetic evaluation purposes include far more positive than negative emotions. At the same time, many overall positive aesthetic emotions encompass negative or mixed emotional ingredients BIBREF2, e.g., feelings of suspense include both hopeful and fearful anticipations." ], "highlighted_evidence": [ "Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4. In cases like these, the emotional response actually implies an aesthetic evaluation: narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason. 
Similarly, feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions” BIBREF2." ] } ], "annotation_id": [ "1d2fb096ab206ab6e9b50087134e1ef663a855d1" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
1705.09665
Community Identity and User Engagement in a Multi-Community Landscape
A community's identity defines and shapes its internal dynamics. Our current understanding of this interplay is mostly limited to glimpses gathered from isolated studies of individual communities. In this work we provide a systematic exploration of the nature of this relation across a wide variety of online communities. To this end we introduce a quantitative, language-based typology reflecting two key aspects of a community's identity: how distinctive, and how temporally dynamic it is. By mapping almost 300 Reddit communities into the landscape induced by this typology, we reveal regularities in how patterns of user engagement vary with the characteristics of a community. Our results suggest that the way new and existing users engage with a community depends strongly and systematically on the nature of the collective identity it fosters, in ways that are highly consequential to community maintainers. For example, communities with distinctive and highly dynamic identities are more likely to retain their users. However, such niche communities also exhibit much larger acculturation gaps between existing users and newcomers, which potentially hinder the integration of the latter. More generally, our methodology reveals differences in how various social phenomena manifest across communities, and shows that structuring the multi-community landscape can lead to a better understanding of the systematic nature of this diversity.
{ "section_name": [ "Introduction", "A typology of community identity", "Overview and intuition", "Language-based formalization", "Community-level measures", "Applying the typology to Reddit", "Community identity and user retention", "Community-type and monthly retention", "Community-type and user tenure", "Community identity and acculturation", "Community identity and content affinity", "Further related work", "Conclusion and future work", "Acknowledgements" ], "paragraphs": [ [ "“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”", "", "— Italo Calvino, Invisible Cities", "A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within.", "One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. 
Are there certain types of communities where we can expect similar or contrasting engagement patterns?", "To address such questions quantitatively, we need to provide structure to the diverse and complex space of online communities. Organizing the multi-community landscape would allow us to both characterize individual points within this space and reason about systematic variations in patterns of user engagement across the space.", "Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identity and user engagement, we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and the dynamics of its temporal evolution.", "Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format.", "Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics.
In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features.", "Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.", "More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities.", "More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity." 
], [ "A community's identity derives from its members' common interests and shared experiences BIBREF15 , BIBREF20 . In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.", "We now proceed to outline our quantitative typology, which maps communities along these two dimensions. We start by providing an intuition through inspecting a few example communities. We then introduce a generalizable language-based methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity." ], [ "In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.", "We consider four communities from Reddit: in Seahawks, fans of the Seahawks football team gather to discuss games and players; in BabyBumps, expecting mothers trade advice and updates on their pregnancy; Cooking consists of recipe ideas and general discussion about cooking; while in pics, users share various images of random things (like eels and hornets). We note that these communities are topically contrasting and foster fairly disjoint user bases. Additionally, these communities exhibit varied patterns of user engagement. While Seahawks maintains a devoted set of users from month to month, pics is dominated by transient users who post a few times and then depart.", "Discussions within these communities also span varied sets of interests. Some of these interests are more specific to the community than others: risotto, for example, is seldom a discussion point beyond Cooking. 
Additionally, some interests consistently recur, while others are specific to a particular time: kitchens are a consistent focus point for cooking, but mint is only in season during spring. Coupling specificity and consistency we find interests such as easter, which isn't particularly specific to BabyBumps but gains prominence in that community around Easter (see Figure FIGREF3 .A for further examples).", "These specific interests provide a window into the nature of the communities' interests as a whole, and by extension their community identities. Overall, discussions in Cooking focus on topics which are highly distinctive and consistently recur (like risotto). In contrast, discussions in Seahawks are highly dynamic, rapidly shifting over time as new games occur and players are traded in and out. In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity (examples in Figure FIGREF3 .B)." ], [ "Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. We then extend these word-level primitives to characterize entire comments, and the community itself.", "Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution, in order to identify instances of linguistic variation BIBREF21 , BIBREF19 . 
Our particular framework makes this comparison by way of pointwise mutual information (PMI).", "In the following, we use c to denote one community within a set C of communities, and t to denote one time period within the entire history H(c) of c. We account for temporal as well as inter-community variation by computing word-level measures for each time period of each community's history, c^t. Given a word w used within a particular community c at time t, we define two word-level measures:", "Specificity. We quantify the specificity s(w, c) of w to c by calculating the PMI of w and c, relative to C: s(w, c) = log( f_c(w) / f_C(w) ) ", "where f_c(w) is w's frequency in c. w is specific to c if it occurs more frequently in c than in the entire set C, hence distinguishing this community from the rest. A word w whose occurrence is decoupled from c, and thus has s(w, c) close to 0, is said to be generic.", "We compute values of s(w, c) for each time period t in H(c); in the above description we drop the time-based subscripts for clarity.", "Volatility. We quantify the volatility v(w, c^t) of w at c^t as the PMI of w and c^t relative to H(c), the entire history of c: v(w, c^t) = log( f_{c^t}(w) / f_{H(c)}(w) ) ", "A word w is volatile at time t in c if it occurs more frequently at c^t than in the entire history H(c), behaving as a fad within a small window of time. A word that occurs with similar frequency across time, and hence has v(w, c^t) close to 0, is said to be stable.", "Extending to utterances. 
Using our word-level primitives, we define the specificity of an utterance INLINEFORM0 in INLINEFORM1 , INLINEFORM2 as the average specificity of each word in the utterance. The volatility of utterances is defined analogously.", "" ], [ "Having described these word-level measures, we now proceed to establish the primary axes of our typology:", "Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 . We refer to a community with a less distinctive identity as being generic.", "Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . We refer to a community whose language is relatively consistent throughout time as being stable.", "In our subsequent analyses, we focus mostly on examining the average distinctiveness and dynamicity of a community over time, denoted INLINEFORM0 and INLINEFORM1 ." ], [ "We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.", "Dataset description. Reddit is a popular website where users form and participate in discussion-based communities called subreddits. Within these communities, users post content—such as images, URLs, or questions—which often spark vibrant, lengthy discussions in thread-based comment sections.", "The website contains many highly active subreddits with thousands of active subscribers. These communities span an extremely rich variety of topical interests, as represented by the examples described earlier. 
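The word-level PMI measures defined above (specificity relative to all communities, volatility relative to a community's own history) and their utterance-level averages can be sketched as follows. This is a minimal illustration with hypothetical counts and our own function names, not the paper's implementation:

```python
import math
from collections import Counter

def pmi(p_target, p_background):
    """Pointwise mutual information: log-ratio of a word's frequency
    in a target distribution vs. a background distribution."""
    return math.log(p_target / p_background)

def freq(counts, word):
    return counts[word] / sum(counts.values())

def specificity(word, community_counts, all_counts):
    # How specific is `word` to this community vs. all communities?
    return pmi(freq(community_counts, word), freq(all_counts, word))

def volatility(word, month_counts, history_counts):
    # How specific is `word` to this month vs. the community's full history?
    return pmi(freq(month_counts, word), freq(history_counts, word))

def utterance_specificity(utterance, community_counts, all_counts):
    # Utterance-level measure: average specificity of its words.
    words = utterance.split()
    return sum(specificity(w, community_counts, all_counts) for w in words) / len(words)

# Hypothetical word counts for one community and for all communities pooled.
cooking = Counter({"risotto": 30, "recipe": 50, "the": 200})
background = Counter({"risotto": 35, "recipe": 90, "the": 4000})

# "risotto" is specific to Cooking; "the" is generic (PMI near or below 0).
assert specificity("risotto", cooking, background) > specificity("the", cooking, background)
```

Distinctiveness and dynamicity then follow by averaging these utterance-level scores over all utterances in a community.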
They also vary along a rich multitude of structural dimensions, such as the number of users, the amount of conversation and social interaction, and the social norms determining which types of content become popular. The diversity and scope of Reddit's multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics.", "Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 ).", "Estimating linguistic measures. We estimate word frequencies INLINEFORM0 , and by extension each downstream measure, in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour. First, we only consider top-level comments which are initial responses to a post, as the content of lower-level responses might reflect conventions of dialogue more than a community's high-level interests. Next, in order to prevent a few highly active users from dominating our frequency estimates, we count each unique word once per user, ignoring successive uses of the same word by the same user. This ensures that our word-level characterizations are not skewed by a small subset of highly active contributors.", "In our subsequent analyses, we will only look at these measures computed over the nouns used in comments. In principle, our framework can be applied to any choice of vocabulary. However, in the case of Reddit using nouns provides a convenient degree of interpretability. 
We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist, but interpreting the overuse of verbs or function words such as take or of is less straightforward. Additionally, in focusing on nouns we adopt the view emphasized in modern “third wave” accounts of sociolinguistic variation, that stylistic variation is inseparable from topical content BIBREF23 . In the case of online communities, the choice of what to talk about serves as a primary signal of social identity. That said, a typology based on more purely stylistic differences is an interesting avenue for future work.", "Accounting for rare words. One complication when using measures such as PMI, which are based on ratios of frequencies, is that estimates for very infrequent words could be overemphasized BIBREF24 . Words that only appear a few times in a community tend to score at the extreme ends of our measures (e.g. as highly specific or highly generic), obfuscating the impact of more frequent words in the community. To address this issue, we discard the long tail of infrequent words in our analyses, using only the top 5th percentile of words, by frequency within each INLINEFORM0 , to score comments and communities.", "Typology output on Reddit. The distribution of INLINEFORM0 and INLINEFORM1 across Reddit communities is shown in Figure FIGREF3 .B, along with examples of communities at the extremes of our typology. We find that interpretable groupings of communities emerge at various points within our axes. For instance, highly distinctive and dynamic communities tend to focus on rapidly-updating interests like sports teams and games, while generic and consistent communities tend to be large “link-sharing” hubs where users generally post content with no clear dominating themes. 
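The frequency-estimation controls described above (counting each word at most once per user, then discarding the long tail of rare words) can be sketched as follows; the data and function names are hypothetical illustrations:

```python
from collections import Counter

def word_counts(comments_by_user):
    """Count each unique word once per user, so a few prolific users
    cannot dominate the community's frequency estimates."""
    counts = Counter()
    for user, comments in comments_by_user.items():
        seen = set()
        for comment in comments:
            seen.update(comment.split())
        counts.update(seen)  # each word counted at most once per user
    return counts

def frequent_words(counts, top_fraction=0.05):
    """Keep only the top fraction of words by frequency, discarding the
    long tail of rare words whose PMI estimates would be unreliable."""
    ranked = [w for w, _ in counts.most_common()]
    k = max(1, int(len(ranked) * top_fraction))
    return set(ranked[:k])

comments = {
    "alice": ["great recipe", "great sauce"],
    "bob": ["great stock"],
}
counts = word_counts(comments)
assert counts["great"] == 2  # counted once per user, despite alice's repeats
```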
More examples of communities at the extremes of our typology are shown in Table TABREF9 .", "We note that these groupings capture abstract properties of a community's content that go beyond its topic. For instance, our typology relates topically contrasting communities such as yugioh (which is about a popular trading card game) and Seahawks through the shared trait that their content is particularly distinctive. Additionally, the axes can clarify differences between topically similar communities: while startrek and thewalkingdead both focus on TV shows, startrek is less dynamic than the median community, while thewalkingdead is among the most dynamic communities, as the show was still airing during the years considered." ], [ "We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity. This section shows that there is an informative and highly predictive relationship between a community's position in this typology and its user engagement patterns. We find that communities with distinctive and dynamic identities have higher rates of user engagement, and further show that a community's position in our identity-based landscape holds important predictive information that is complementary to a strong activity baseline.", "In particular user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 . We quantify how successful communities are at retaining users in terms of both short and long-term commitment. Our results indicate that rates of user retention vary drastically, yet systematically according to how distinctive and dynamic a community is (Figure FIGREF3 ).", "We find a strong, explanatory relationship between the temporal consistency of a community's identity and rates of user engagement: dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement. 
The relationship between distinctiveness and engagement is less universal, but still highly informative: niche communities tend to engender strong, focused interest from users at one particular point in time, though this does not necessarily translate into long-term retention." ], [ "We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's ρ = 0.70, p < 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's ρ = 0.33, p < 0.001; Figure FIGREF11 .A, right).", "Monthly retention is formally defined as the proportion of users who contribute in month t and then return to contribute again in month t + 1. Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95% bootstrapped confidence intervals, cluster-resampled at the level of subreddit BIBREF25 , to account for differences in the number of months each subreddit contributes to the data.", "Importantly, we find that in the task of predicting community-level user retention our identity-based typology holds additional predictive value on top of strong baseline features based on community size (# contributing users) and activity levels (mean # contributions per user), which are commonly used for churn prediction BIBREF7 . We compared out-of-sample predictive performance via leave-one-community-out cross validation using random forest regressors with ensembles of size 100, and otherwise default hyperparameters BIBREF26 . 
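The monthly retention measure defined above, the proportion of month-t contributors who return in month t+1, can be sketched as follows (month labels and user sets are a toy illustration):

```python
def monthly_retention(contributors_by_month):
    """For each consecutive pair of months, compute the proportion of
    users contributing in month t who contribute again in month t+1."""
    months = sorted(contributors_by_month)
    rates = {}
    for prev, nxt in zip(months, months[1:]):
        current = contributors_by_month[prev]
        if current:
            returned = current & contributors_by_month[nxt]
            rates[prev] = len(returned) / len(current)
    return rates

history = {
    "2013-01": {"ann", "ben", "cat", "dan"},
    "2013-02": {"ann", "ben", "eve"},
    "2013-03": {"ann"},
}
assert monthly_retention(history)["2013-01"] == 0.5  # ann and ben returned
```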
A model predicting average monthly retention based on a community's average distinctiveness and dynamicity achieves an average mean squared error ( INLINEFORM0 ) of INLINEFORM1 and INLINEFORM2 , while an analogous model predicting based on a community's size and average activity level (both log-transformed) achieves INLINEFORM4 and INLINEFORM5 . The difference between the two models is not statistically significant ( INLINEFORM6 , Wilcoxon signed-rank test). However, combining features from both models results in a large and statistically significant improvement over each independent model ( INLINEFORM7 , INLINEFORM8 , INLINEFORM9 Bonferroni-corrected pairwise Wilcoxon tests). These results indicate that our typology can explain variance in community-level retention rates, and provides information beyond what is present in standard activity-based features." ], [ "As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's ρ = 0.41, p < 0.001, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's ρ = 0.03, p = 0.77; Figure FIGREF11 .B, right). 
Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content.", "To measure user tenures we focused on one slice of data (May, 2013) and measured how many months a user spends in each community, on average—the average number of months between a user's first and last comment in each community. We have activity data up until May 2015, so the maximum tenure is 24 months in this set-up, which is exceptionally long relative to the average community member (throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community)." ], [ "The previous section shows that there is a strong connection between the nature of a community's identity and its basic user engagement patterns. In this section, we probe the relationship between a community's identity and how permeable, or accessible, it is to outsiders.", "We measure this phenomenon using what we call the acculturation gap, which compares the extent to which engaged vs. non-engaged users employ community-specific language. While previous work has found this gap to be large and predictive of future user engagement in two beer-review communities BIBREF5 , we find that the size of the acculturation gap depends strongly on the nature of a community's identity, with the gap being most pronounced in stable, highly distinctive communities (Figure FIGREF13 ).", "This finding has important implications for our understanding of online communities. Though many works have analyzed the dynamics of “linguistic belonging” in online communities BIBREF16 , BIBREF28 , BIBREF5 , BIBREF17 , our results suggest that the process of linguistically fitting in is highly contingent on the nature of a community's identity. 
At one extreme, in generic communities like pics or worldnews there is no distinctive, linguistic identity for users to adopt.", "To measure the acculturation gap for a community, we follow Danescu-Niculescu-Mizil et al. BIBREF5 and build “snapshot language models” (SLMs) for each community, which capture the linguistic state of a community at one point in time. Using these language models we can capture how linguistically close a particular utterance is to the community by measuring the cross-entropy of this utterance relative to the SLM: H(d; SLM) = -(1/|d|) Σ_{b ∈ d} log P_{SLM}(b) ", "where P_{SLM}(b) is the probability assigned to bigram b from comment d in community-month c^t. We build the SLMs by randomly sampling 200 active users—defined as users with at least 5 comments in the respective community and month. For each of these 200 active users we select 5 random 10-word spans from 5 unique comments. To ensure robustness and maximize data efficiency, we construct 100 SLMs for each community-month pair that has enough data, bootstrap-resampling from the set of active users.", "We compute a basic measure of the acculturation gap for a community-month c^t as the relative difference of the cross-entropy of comments by users active in c^t with that of singleton comments by outsiders—i.e., users who only ever commented once in c^t, but who are still active in Reddit in general: DISPLAYFORM0 ", " P_single denotes the distribution over singleton comments, P_active denotes the distribution over comments from users active in c^t, and E[·] the expected values of the cross-entropy over these respective distributions. 
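A minimal sketch of the snapshot-language-model cross-entropy described above. Add-one smoothing for unseen bigrams is our assumption (this excerpt does not specify how unseen bigrams are handled), and all names and data are hypothetical:

```python
import math
from collections import Counter

def bigrams(tokens):
    return list(zip(tokens, tokens[1:]))

def train_slm(comments):
    """'Snapshot language model': bigram counts from one community-month."""
    counts = Counter()
    for c in comments:
        counts.update(bigrams(c.split()))
    return counts

def cross_entropy(comment, slm, vocab_size):
    """Average negative log-probability of the comment's bigrams under
    the SLM, with add-one smoothing (our assumption) for unseen bigrams."""
    total = sum(slm.values())
    bs = bigrams(comment.split())
    return -sum(math.log((slm[b] + 1) / (total + vocab_size)) for b in bs) / len(bs)

snapshot = train_slm(["the risotto was great", "the risotto needs stock"])
vocab = 1000  # hypothetical smoothing denominator

insider = cross_entropy("the risotto was great", snapshot, vocab)
outsider = cross_entropy("lol nice picture dude", snapshot, vocab)
assert insider < outsider  # in-community language is closer to the SLM
```

The acculturation gap then compares the expected cross-entropy of outsiders' singleton comments with that of active members' comments.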
For each bootstrap-sampled SLM we compute the cross-entropy of 50 comments by active users (10 comments from 5 randomly sampled active users, who were not used to construct the SLM) and 50 comments from randomly-sampled outsiders.", "Figure FIGREF13 .A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is. Highly distinctive communities have far higher acculturation gaps, while dynamicity exhibits a non-linear relationship: relatively stable communities have a higher linguistic 'entry barrier', as do very dynamic ones. Thus, in communities like IAmA (a general Q&A forum) that are very generic, with content that is highly, but not extremely, dynamic, outsiders are at no disadvantage in matching the community's language. In contrast, the acculturation gap is large in stable, distinctive communities like Cooking that have consistent community-specific language. The gap is also large in extremely dynamic communities like Seahawks, which perhaps require more attention or interest on the part of active users to keep up-to-date with recent trends in content.", "These results show that phenomena like the acculturation gap, which were previously observed in individual communities BIBREF28 , BIBREF5 , cannot be easily generalized to a larger, heterogeneous set of communities. At the same time, we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary." 
We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity, by focusing on two particular ways in which such gaps might manifest among users: through different levels of engagement with specific content and with temporally volatile content.", "Echoing previous results, we find that community type mediates the extent and nature of the divide in content affinity. While in distinctive communities active members have a higher affinity for both community-specific content and for highly volatile content, the opposite is true for generic communities, where it is the outsiders who engage more with volatile content.", "We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders. Concretely, for each community INLINEFORM0 , we define the specificity gap INLINEFORM1 as the relative difference between the average specificity of comments authored by active members, and by outsiders, where these measures are macroaveraged over users. Large, positive INLINEFORM2 then occur in communities where active users tend to engage with substantially more community-specific content than outsiders.", "We analogously define the volatility gap INLINEFORM0 as the relative difference in volatilities of active member and outsider comments. Large, positive values of INLINEFORM1 characterize communities where active users tend to have more volatile interests than outsiders, while negative values indicate communities where active users tend to have more stable interests.", "We find that in 94% of communities, INLINEFORM0 , indicating (somewhat unsurprisingly) that in almost all communities, active users tend to engage with more community-specific content than outsiders. 
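The specificity gap described above, a macro-averaged comparison of active members' and outsiders' comment specificity, can be sketched as follows. The exact normalization of the paper's "relative difference" is not given in this excerpt, so normalizing by the outsider mean is our assumption, and all scores and names are hypothetical:

```python
def mean(xs):
    return sum(xs) / len(xs)

def user_mean_specificity(comments_by_user, spec):
    """Macro-average: average specificity within each user first, then
    across users, so prolific users do not dominate the estimate."""
    return mean([mean([spec(c) for c in cs]) for cs in comments_by_user.values()])

def specificity_gap(active, outsiders, spec):
    a = user_mean_specificity(active, spec)
    o = user_mean_specificity(outsiders, spec)
    # Relative difference; normalizing by the outsider mean is our
    # assumption, as the excerpt does not give the exact formula.
    return (a - o) / abs(o)

# Hypothetical per-comment specificity scores (stand-ins for PMI values).
spec = {"hops malt": 2.0, "my beer": 1.0, "nice pic": 0.5}.get
active = {"u1": ["hops malt"], "u2": ["hops malt", "my beer"]}
outsiders = {"u3": ["nice pic"], "u4": ["my beer"]}

assert specificity_gap(active, outsiders, spec) > 0  # insiders more specific
```

The volatility gap is computed analogously, with per-comment volatility scores in place of specificity.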
However, the magnitude of this divide can vary greatly: for instance, in Homebrewing, which is dedicated to brewing beer, the divide is very pronounced (specificity gap of 0.33) compared to funny, a large hub where users share humorous content (0.011).", "The nature of the volatility gap is comparatively more varied. In Homebrewing (volatility gap of 0.16), as in 68% of communities, active users tend to write more volatile comments than outsiders (a positive volatility gap). However, communities like funny (volatility gap of -0.16), where active users contribute relatively stable comments compared to outsiders (a negative volatility gap), are also well-represented on Reddit.", "To understand whether these variations manifest systematically across communities, we examine the relationship between divides in content affinity and community type. In particular, following the intuition that active users have a relatively high affinity for a community's niche, we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps. Indeed, we find a strong correlation between a community's distinctiveness and its specificity gap (Spearman's ρ = 0.34, p < 0.001).", "We also find a strong correlation between distinctiveness and community volatility gaps (Spearman's ρ = 0.53, p < 0.001). In particular, we see that among the most distinctive communities (i.e., the top third of communities by distinctiveness), active users tend to write more volatile comments than outsiders (mean volatility gap of 0.098), while across the most generic communities (i.e., the bottom third), active users tend to write more stable comments (mean of -0.047, Mann-Whitney U test p < 0.001). The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community, without necessarily engaging users in the long term."
], [ "Our language-based typology and analysis of user engagement draws on and contributes to several distinct research threads, in addition to the many foundational studies cited in the previous sections.", "Multicommunity studies. Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups, such as email listservs. Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . In focusing on the linguistic content of communities, we extend this research by providing a content-based framework through which user engagement can be examined.", "Reddit has been a particularly useful setting for studying multiple communities in prior work. Such studies have mostly focused on characterizing how individual users engage across a multi-community platform BIBREF34 , BIBREF35 , or on specific user engagement patterns such as loyalty to particular communities BIBREF22 . We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them.", "Typologies of online communities. Prior attempts to typologize online communities have primarily been qualitative and based on hand-designed categories, making them difficult to apply at scale. These typologies often hinge on having some well-defined function the community serves, such as supporting a business or non-profit cause BIBREF36 , which can be difficult or impossible to identify in massive, anonymous multi-community settings. Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 , BIBREF38 , which are important but preclude analyzing differences between communities within the same multi-community platform. 
Similarly, previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 , or platform-specific affordances such as evaluation mechanisms BIBREF39 .", "Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 , BIBREF41 . While the focus of those studies is to identify and characterize sub-communities within a larger social network, our typology provides a characterization of pre-defined communities based on the nature of their identity.", "Broader work on collective identity. Our focus on community identity dovetails with a long line of research on collective identity and user engagement, in both online and offline communities BIBREF42 , BIBREF1 , BIBREF2 . These studies focus on individual-level psychological manifestations of collective (or social) identity, and their relationship to user engagement BIBREF42 , BIBREF43 , BIBREF44 , BIBREF0 .", "In contrast, we seek to characterize community identities at an aggregate level and in an interpretable manner, with the goal of systematically organizing the diverse space of online communities. Typologies of this kind are critical to these broader, social-psychological studies of collective identity: they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities." ], [ "Our current understanding of engagement patterns in online communities is pieced together from glimpses offered by several disparate studies focusing on a few individual communities. This work calls attention to the need for a method to systematically reason about similarities and differences across communities. 
By proposing a way to structure the multi-community space, we find not only that radically contrasting engagement patterns emerge in different parts of this space, but also that this variation can be at least partly explained by the type of identity each community fosters.", "Our choice in this work is to structure the multi-community space according to a typology based on community identity, as reflected in language use. We show that this effectively explains cross-community variation of three different user engagement measures—retention, acculturation and content affinity—and complements measures based on activity and size with additional interpretable information. For example, we find that in niche communities established members are more likely to engage with volatile content than outsiders, while the opposite is true in generic communities. Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities.", "One main area of future research is to examine the temporal dynamics in the multi-community landscape. By averaging our measures of distinctiveness and dynamicity across time, our present study treated community identity as a static property. However, as communities experience internal changes and respond to external events, we can expect the nature of their identity to shift as well. For instance, the relative consistency of harrypotter may be disrupted by the release of a new novel, while Seahawks may foster different identities during and between football seasons. Conversely, a community's type may also mediate the impact of new events. 
Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities, and also provide a more nuanced view of user engagement—for instance, are communities more welcoming to newcomers at certain points in their lifecycle?", "Another important avenue of future work is to explore other ways of mapping the landscape of online communities. For example, combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identity-based measures could further characterize and explain variations in user engagement patterns. Furthermore, extending the present analyses to even more diverse communities supported by different platforms (e.g., GitHub, StackExchange, Wikipedia) could enable the characterization of more complex behavioral patterns such as collaboration and altruism, which become salient in different multicommunity landscapes." ], [ "The authors thank Liye Fu, Jack Hessel, David Jurgens and Lillian Lee for their helpful comments. This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell, NSF CNS-1010921, IIS-1149837, IIS-1514268 NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, DARPA NGS2, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen. " ] ] }
1908.06606
Question Answering based Clinical Text Structuring Using Pre-trained Language Model
Clinical text structuring is a critical and fundamental task for clinical research. Traditional methods such as task-specific end-to-end models and pipeline models usually suffer from a lack of datasets and from error propagation. In this paper, we present a question answering based clinical text structuring (QA-CTS) task to unify different specific tasks and make datasets shareable. A novel model that aims to introduce domain-specific features (e.g., clinical named entity information) into a pre-trained language model is also proposed for the QA-CTS task. Experimental results on Chinese pathology reports collected from Ruijing Hospital demonstrate that our presented QA-CTS task is very effective in improving performance on specific tasks. Our proposed model also competes favorably with strong baseline models on specific tasks.
{ "section_name": [ "Introduction", "Related Work ::: Clinical Text Structuring", "Related Work ::: Pre-trained Language Model", "Question Answering based Clinical Text Structuring", "The Proposed Model for QA-CTS Task", "The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text", "The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information", "The Proposed Model for QA-CTS Task ::: Integration Method", "The Proposed Model for QA-CTS Task ::: Final Prediction", "The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism", "Experimental Studies", "Experimental Studies ::: Dataset and Evaluation Metrics", "Experimental Studies ::: Experimental Settings", "Experimental Studies ::: Comparison with State-of-the-art Methods", "Experimental Studies ::: Ablation Analysis", "Experimental Studies ::: Comparisons Between Two Integration Methods", "Experimental Studies ::: Data Integration Analysis", "Conclusion", "Acknowledgment" ], "paragraphs": [ [ "Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structured patient medical data are obtained, such as whether the patient has specific symptoms or diseases, what the tumor size is, how far from the tumor the surgical cut was made, or what a specific laboratory test result is. Extracting structured data from clinical text is important because biomedical systems and biomedical research rely heavily on structured data but cannot obtain it directly. In addition, clinical text often contains abundant healthcare information. CTS can therefore provide large-scale extracted structured data for numerous downstream clinical research tasks.", "However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. 
result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct a different model for each task, which is costly in itself, and each model then calls for its own labeled data. Moreover, labeling the amount of data necessary to train a neural network incurs expensive labor costs. To cope with this, researchers often turn to rule-based structuring methods, which have lower labeling costs.", "Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training datasets. Pipeline methods break down the entire process into several pieces, which improves performance and generality. However, as the pipeline depth grows, error propagation has a greater impact on performance.", "To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from the original paragraph text. In some cases, this text is indeed already the final answer (e.g., extracting a sub-string). In other cases, several further steps are needed to obtain the final answer, such as entity name conversion and negative word recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. 
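The unified span-extraction format that QA-CTS relies on can be illustrated with a minimal sketch; the class and field names below are our own invention, not the paper's, and the report text is a made-up example:

```python
from dataclasses import dataclass

@dataclass
class QACTSInstance:
    """One QA-CTS example: a specific CTS task is recast as locating the
    most related span in the paragraph, so heterogeneous tasks share a
    single output format (and hence can share training data)."""
    paragraph: str
    question: str
    start: int  # character offsets of the answer span
    end: int

    @property
    def answer(self):
        return self.paragraph[self.start:self.end]

report = "Tumor size 2.5 cm, margins clear."
ex = QACTSInstance(report, "What is the tumor size?", start=11, end=17)
assert ex.answer == "2.5 cm"
```

Tasks whose final answer is not a literal sub-string (e.g. a calculated tumor size) would apply further post-processing to the extracted span, as the text describes.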
The main contributions of this work can be summarized as follows.", "We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and makes datasets shareable. We also propose an effective model to integrate clinical named entity information into a pre-trained language model.", "Experimental results show that the QA-CTS task leads to significant improvement due to the shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we show that the two-stage training mechanism brings a great improvement on the QA-CTS task.", "The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present the question answering based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations of the key issues of our proposed model. Finally, conclusions are given in Section SECREF6." ], [ "Clinical text structuring is a problem highly related to practical applications. Most existing studies are case-by-case; few of them are developed for the general-purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.", "Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely heavily on heuristics and handcrafted extraction rules, which is more of an art than a science and incurs extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers using dictionaries and several features of protein names. Wang et al. BIBREF1 developed linguistic rules (i.e. normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al.
BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique that expands the bio-entity dictionary by combining different data sources and improves the recall rate through a shortest-path edit distance algorithm. This kind of approach features interpretability and easy modifiability. However, as the number of rules increases, adding new rules to an existing system turns into a rule disaster.", "Task-specific end-to-end methods BIBREF3, BIBREF4 use large amounts of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five outputs. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models could be applied to another task due to differences in output format. This makes building a new model for a new task a costly job.", "Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attribute extraction, which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components, such as noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, a sentence splitter, named entity linking BIBREF19, BIBREF20, BIBREF21, and relation extraction BIBREF22, BIBREF23. This kind of method focuses on language itself, so it can handle tasks more generally. However, as the depth of the pipeline grows, error propagation obviously becomes more and more serious. On the contrary, using fewer components to decrease the pipeline depth leads to poor performance. So the upper limit of this method depends mainly on the worst component."
], [ "Recently, some works focused on pre-trained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27 which makes language model a shared model to all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language model. Peters et al. BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag which causes pretrain-finetune discrepancy that BERT is subject to.", "The main motivation of introducing pre-trained language model is to solve the shortage of labeled data and polysemy problem. Although polysemy problem is not a common phenomenon in biomedical domain, shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT on large-scale biomedical unannotated data and achieved improvement on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT into multi-type named entity recognition and discovered new entities. Both of them demonstrates the usefulness of introducing pre-trained language model into biomedical domain." ], [ "Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded to extract or generate a key-value pair where key $Q$ is typically a query term such as proximal resection margin and value $V$ is a result of query term $Q$ according to the paragraph text $X$.", "Generally, researchers solve CTS problem in two steps. Firstly, the answer-related text is pick out. 
Then several steps, such as entity name conversion and negative word recognition, are deployed to generate the final answer. While the final answer varies from task to task, which is the true cause of non-uniform output formats, finding the answer-related text is a common action among all tasks. Traditional methods regard both steps as a whole. In this paper, we focus on finding the answer-related substring $X_s = <x_i, x_{i+1}, x_{i+2}, ..., x_j>$ $(1 \\le i < j \\le n)$ from the paragraph text $X$. For example, given the sentence UTF8gkai“远端胃切除标本：小弯长11.5cm，大弯长17.0cm。距上切端6.0cm、下切端8.0cm\" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query UTF8gkai“上切缘距离\" (proximal resection margin), the answer should be 6.0cm, which is located in the original text from index 32 to 37. With such a definition, the output format of CTS tasks is unified, which makes the training data shareable and thereby reduces the amount of training data required.", "Since BERT BIBREF26 has already demonstrated the usefulness of a shared model, we suppose that extracting the commonality of this problem and unifying the output format will make the model more powerful than a dedicated model and, meanwhile, allow the data of other tasks to supplement the training data of a specific clinical task." ], [ "In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS) task. As shown in Fig. FIGREF8, the paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequences for the query text, $I_{nq}$, and the paragraph text, $I_{nt}$, with the BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$.
Meanwhile, the paragraph text $X$ and the query text $Q$ are organized and passed to a contextualized representation model, which here is the pre-trained language model BERT BIBREF26, to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed-forward network to calculate the start and end indices of the answer-related text. We define this calculation problem as a classification of each word as the start or end word." ], [ "For any clinical free-text paragraph $X$ and query $Q$, contextualized representation generates an encoded vector of both of them. Here we use the pre-trained language model BERT-base BIBREF26 to capture contextual information.", "The text input is constructed as [CLS] $Q$ [SEP] $X$ [SEP]. For a Chinese sentence, each word in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence-type input is generated, which is a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically to bring in absolute position information and to eliminate the impact of zero padding, respectively. Then a hidden vector $V_s$, which contains both query and text information, is generated through the BERT-base model." ], [ "Since BERT is trained on a general corpus, its performance in the biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.", "The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags. Each character of the original sentence is tagged with a label following a tag scheme.
In this paper we recognize the entities with the model of our previous work BIBREF12, but trained on another corpus which has 44 entity types, including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of a named entity information sequence is demonstrated in Table TABREF2. In Table TABREF2, UTF8gkai“远端胃切除\" is tagged as an operation, '11.5' is a number word and 'cm' is a unit word. The named entity tag sequence is organized in one-hot form. We denote the sequences for the clinical sentence and the query term as $I_{nt}$ and $I_{nq}$, respectively." ], [ "There are two ways to integrate the two named entity information vectors $I_{nt}$ and $I_{nq}$, or the hidden contextualized representation $V_s$ and the named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first is to concatenate them together, because they are sequence outputs with a common dimension. The second is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.", "For the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows, where $h$ is the number of heads and $W_o$ is used to project the concatenated matrix back to the original dimension.", "$Attention$ denotes the traditional attention and can be defined as follows.", "where $d_k$ is the length of the hidden vector." ], [ "The final step is to use the integrated representation $H_i$ to predict the start and end indices of the answer-related text. We define this calculation problem as a classification of each word as the start or end word. We use a feed-forward network (FFN) to compress and calculate the score of each word, $H_f$, which reduces the dimension to $\\left\\langle l_s, 2\\right\\rangle$, where $l_s$ denotes the length of the sequence.", "Then we permute the two dimensions for the softmax calculation.
The calculation of the loss function can be defined as follows.", "where $O_s = softmax(permute(H_f)_0)$ denotes the probability score of each word being the start word and similarly $O_e = softmax(permute(H_f)_1)$ denotes the end. $y_s$ and $y_e$ denote the true start word and end word of the answer, respectively." ], [ "The two-stage training mechanism was previously applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained first for coarse-grained features while the parameters of the other are frozen. Then the other is unfrozen and the entire model is trained at a low learning rate to fetch fine-grained features.", "Inspired by this, and due to the large number of parameters in the BERT model, to speed up the training process we first fine-tune the BERT model with the new prediction layer to achieve better contextualized representation performance. Then we deploy the proposed model, load the fine-tuned BERT weights, attach the named entity information layers and retrain the model." ], [ "In this section, we experimentally evaluate our proposed task and approach. The best results in tables are in bold." ], [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences.
Detailed statistics of different types of entities are listed in Table TABREF20.", "In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score metric is a looser metric that measures the average overlap between the prediction and the ground truth answer." ], [ "To implement deep neural network models, we utilize the Keras library BIBREF36 with the TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by the Adam optimization algorithm BIBREF38 with the default settings except for the learning rate, which is set to $5\\times 10^{-5}$. The batch size is set to 3 or 4 due to limited graphics memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training a BERT language model, we directly adopt the parameters pre-trained by Google on a Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts." ], [ "Since BERT has already achieved state-of-the-art performance on question answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions, BERT-Base and BERT-Large, due to the lack of computational resources we can only compare with the BERT-Base model instead of BERT-Large. A prediction layer is attached at the end of the original BERT-Base model and we fine-tune it on our dataset. In this section, the named entity integration method is pure concatenation (the named entity information of the pathology report text and the query text is concatenated first, and then the contextualized representation and the concatenated named entity information are concatenated).
Comparative results are summarized in Table TABREF23.", "Table TABREF23 indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model led to a 5.64% improvement in EM-score and 3.69% in F$_1$-score. Although our model outperformed QANet by only 0.13% in F$_1$-score, it significantly outperformed it by 6.39% in EM-score." ], [ "To further investigate the effects of named entity information and the two-stage training mechanism on our model, we apply an ablation analysis to see the improvement brought by each of them, where $\\times$ refers to removing that part from our model.", "As demonstrated in Table TABREF25, with named entity information enabled, the two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without the two-stage training mechanism, named entity information led to an improvement of 1.28% in EM-score but also a slight deterioration of 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% improvement in EM-score and a 3.69% improvement in F$_1$-score. The experimental results show that both named entity information and the two-stage training mechanism are helpful to our model." ], [ "There are two methods to integrate named entity information into the existing model, and we compare these two integration methods experimentally. As named entity recognition has been applied on both the pathology report text and the query text, there are two integration points here: one for the two named entity information sequences and the other for the contextualized representation and the integrated named entity information.
For multi-head attention BIBREF30, we set the number of heads to $h = 16$ with a 256-dimension hidden vector size for each head.", "From Table TABREF27, we can observe that applying concatenation at both points achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention at both point one and point two could not reach convergence in our experiments. This is probably because it makes the model too complex to train. The difference between the other two methods is the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information sequences $I_{nt}$ and $I_{nq}$ first achieved a better performance, with 89.87% in EM-score and 92.88% in F$_1$-score. Applying concatenation first could only achieve 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size. BERT's output has been modified by many layers, but the named entity information representation is very close to the input. With the large number of parameters in multi-head attention, massive training is required to find the optimal parameters. However, our dataset is significantly smaller than what pre-trained BERT uses. This can probably also explain why applying the multi-head attention method at both points could not converge.", "Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to the lack of computational resources, our experiments fixed the head number and hidden vector size. However, tuning these hyperparameters may affect the result. Tuning the integration method and utilizing larger datasets may help improve the performance." ], [ "To investigate how the shared task and shared model can be beneficial, we split our dataset by query type, train our proposed model with the different datasets and demonstrate its performance on the different datasets.
Firstly, we investigate the performance of the model without two-stage training and named entity information.", "As indicated in Table TABREF30, the model trained on mixed data outperforms the task-specific models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score, but both were still above 90%. The shared model brought improvements of 0.69% and 0.37% in EM-score for proximal and distal resection margin prediction, while the F$_1$-scores for those two tasks declined by 3.11% and 0.77%.", "Then we investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training process uses only the task-specific dataset, not the mixed data. From Table TABREF31 we can observe that proximal and distal resection margin achieved the best performance in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score. Meanwhile, the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Most other results also improved considerably. This again proves the usefulness of two-stage training and named entity information.", "Lastly, we fine-tune the model for each task with pre-trained parameters. Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves model performance over a model trained on task-specific data. Except for tumor size, the result improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin.
This proves that mixed-data pre-trained parameters can lead to a great benefit for a specific task. Meanwhile, the model performance on the other tasks, which were not trained in the final stage, also improved from around 0 to 60 or 70 percent. This proves that there is commonality between different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance for a specific dataset, pre-training the model on multiple datasets and then fine-tuning it on the specific dataset is the best way." ], [ "In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilizes different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. Initially, the sequential results of named entity recognition on both the paragraph and query texts are integrated together. Contextualized representations of both the paragraph and query texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for the final prediction. Experimental results on a real-world dataset demonstrate that our proposed model competes favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by the QA-CTS task have also been proven useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance for a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset." ], [ "We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital), who helped us greatly with data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research\" (No. 2018YFC0910500)."
] ] }
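The span-prediction step described in the "Final Prediction" paragraphs above (an FFN compressing each integrated word representation $H_i$ into an $\left\langle l_s, 2\right\rangle$ score matrix $H_f$, a permutation of the two dimensions, and a softmax giving the start and end distributions $O_s$ and $O_e$) can be sketched in plain NumPy. All weights, shapes and inputs below are illustrative placeholders, not the paper's trained parameters:

```python
import numpy as np

def predict_span(H, W, b):
    """Sketch of the QA-CTS final prediction layer: an FFN maps each
    word's integrated representation to two scores, giving H_f of
    shape (l_s, 2); the dimensions are permuted to (2, l_s), and a
    softmax over sequence positions yields the start (O_s) and end
    (O_e) probability distributions described in the paper."""
    H_f = H @ W + b                               # (l_s, 2): per-word start/end scores
    scores = H_f.T                                # permute -> (2, l_s)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)      # softmax over the sequence
    O_s, O_e = probs[0], probs[1]
    return int(O_s.argmax()), int(O_e.argmax()), O_s, O_e

# Toy usage with random placeholder weights.
rng = np.random.default_rng(0)
l_s, d = 6, 8                                     # sequence length, hidden size
H = rng.normal(size=(l_s, d))                     # stand-in for the integrated representation
W, b = rng.normal(size=(d, 2)), np.zeros(2)       # placeholder FFN parameters
start, end, O_s, O_e = predict_span(H, W, b)
```

Training would then minimize the cross-entropy between $O_s$, $O_e$ and the ground-truth start/end indices $y_s$, $y_e$, matching the loss described in the paper.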
{ "question": [ "What data is the language model pretrained on?", "What baselines is the proposed model compared against?", "How is the clinical text structuring task defined?", "What are the specific tasks being unified?", "Is all text in this dataset a question, or are there unrelated sentences in between questions?", "How many questions are in the dataset?", "What is the perWhat are the tasks evaluated?", "Are there privacy concerns with clinical data?", "How they introduce domain-specific features into pre-trained language model?", "How big is QA-CTS task dataset?", "How big is dataset of pathology reports collected from Ruijing Hospital?", "What are strong baseline models in specific tasks?" ], "question_id": [ "71a7153e12879defa186bfb6dbafe79c74265e10", "85d1831c28d3c19c84472589a252e28e9884500f", "1959e0ebc21fafdf1dd20c6ea054161ba7446f61", "77cf4379106463b6ebcb5eb8fa5bb25450fa5fb8", "06095a4dee77e9a570837b35fc38e77228664f91", "19c9cfbc4f29104200393e848b7b9be41913a7ac", "6743c1dd7764fc652cfe2ea29097ea09b5544bc3", "14323046220b2aea8f15fba86819cbccc389ed8b", "08a5f8d36298b57f6a4fcb4b6ae5796dc5d944a4", "975a4ac9773a4af551142c324b64a0858670d06e", "326e08a0f5753b90622902bd4a9c94849a24b773", "bd78483a746fda4805a7678286f82d9621bc45cf" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "five", "five", "five", "five", "zero", "zero", "zero", "zero" ], "topic_background": [ "research", "research", "research", "research", "familiar", "familiar", "familiar", "familiar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "question answering", "question answering", "question answering", "question answering", "Question Answering", "Question Answering", "Question Answering", "Question Answering", "", "", "", "" ], "question_writer": [ "ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668", 
"ecca0cede84b7af8a918852311d36346b07f0668", "ecca0cede84b7af8a918852311d36346b07f0668", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Chinese general corpus" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to $5\\times 10^{-5}$. Batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts." ], "highlighted_evidence": [ "Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts." 
] }, { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "0ab604dbe114dba174da645cc06a713e12a1fd9d", "1f1495d06d0abe86ee52124ec9f2f0b25a536147" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BERT-Base", "QANet" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Experimental Studies ::: Comparison with State-of-the-art Methods", "Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23." ], "highlighted_evidence": [ "Experimental Studies ::: Comparison with State-of-the-art Methods\nSince BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large." 
] }, { "unanswerable": false, "extractive_spans": [ "QANet BIBREF39", "BERT-Base BIBREF26" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23.", "FLOAT SELECTED: TABLE III COMPARATIVE RESULTS BETWEEN BERT AND OUR PROPOSED MODEL" ], "highlighted_evidence": [ "Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. 
", "FLOAT SELECTED: TABLE III COMPARATIVE RESULTS BETWEEN BERT AND OUR PROPOSED MODEL" ] } ], "annotation_id": [ "0de2087bf0e46b14042de2a6e707bbf544a04556", "c14d9acff1d3e6f47901e7104a7f01a10a727050" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained.", "Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. " ], "yes_no": null, "free_form_answer": "", "evidence": [ "FLOAT SELECTED: Fig. 1. An illustrative example of QA-CTS task.", "Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.", "To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). 
Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from the original paragraph text. In some cases, this is already the final answer (e.g., extracting a sub-string). In other cases, several steps are needed to obtain the final answer, such as entity name conversion and negative word recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows." ], "highlighted_evidence": [ "FLOAT SELECTED: Fig. 1. An illustrative example of QA-CTS task.", "Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structured patient medical data are obtained, such as whether the patient has specific symptoms or diseases, what the tumor size is, how far from the tumor the cut is made during surgery, or what a specific laboratory test result is. It is important to extract structured data from clinical text because biomedical systems and biomedical research rely greatly on structured data but cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for numerous downstream clinical studies.", "Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from the original paragraph text. In some cases, this is already the final answer (e.g., extracting a sub-string). In other cases, several steps are needed to obtain the final answer, such as entity name conversion and negative word recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. 
" ] }, { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "CTS is extracting structural data from medical research data (unstructured). Authors define QA-CTS task that aims to discover most related text from original text.", "evidence": [ "Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.", "However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.", "To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. 
For some cases, it is already the final answer indeed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows." ], "highlighted_evidence": [ "Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly.", "However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size).", "To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text." 
] } ], "annotation_id": [ "6d56080358bb7f22dd764934ffcd6d4e93fef0b2", "da233cce57e642941da2446d3e053349c2ab1a15" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " three types of questions, namely tumor size, proximal resection margin and distal resection margin" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.", "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. 
Detailed statistics of different types of entities are listed in Table TABREF20.", "In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilizes different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. Initially, sequential results of named entity recognition on both paragraph and query texts are integrated together. Contextualized representation on both paragraph and query texts are transformed by a pre-trained language model. Then, the integrated named entity information and contextualized representation are integrated together and fed into a feed forward network for final prediction. Experimental results on real-world dataset demonstrate that our proposed model competes favorably with strong baseline models in all three specific tasks. The shared task and shared model introduced by the QA-CTS task has also been proved to be useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance for a specific dataset is to pre-train the model in multiple datasets and then fine-tune it on the specific dataset." ], "highlighted_evidence": [ "Our presented QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data.", "All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. ", "Experimental results on real-world dataset demonstrate that our proposed model competes favorably with strong baseline models in all three specific tasks." 
] }, { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "7138d812ea70084e7610e5a2422039da1404afd7", "b732d5561babcf37393ebf6cbb051d04b0b66bd5" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "the dataset consists of pathology reports including sentences and questions and answers about tumor size and resection margins so it does include additional sentences ", "evidence": [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20." ], "highlighted_evidence": [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. 
" ] } ], "annotation_id": [ "1d4d4965fd44fefbfed0b3267ef5875572994b66" ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "2,714 ", "evidence": [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20." ], "highlighted_evidence": [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs." 
] } ], "annotation_id": [ "229cc59d1545c9e8f47d43053465e2dfd1b763cc" ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "e2fe2a3438f28758724d992502a44615051eda90" ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "2a73264b743b6dd183c200f7dcd04aed4029f015" ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "integrate clinical named entity information into pre-trained language model" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and make dataset shareable. We also propose an effective model to integrate clinical named entity information into pre-trained language model.", "In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. 
Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word." ], "highlighted_evidence": [ "We also propose an effective model to integrate clinical named entity information into pre-trained language model.", "In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text." ] } ], "annotation_id": [ "5f125408e657282669f90a1866d8227c0f94332e" ], "worker_id": [ "e70d8110563d53282f1a26e823d27e6f235772db" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "17,833 sentences, 826,987 characters and 2,714 question-answer pairs" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. 
All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20." ], "highlighted_evidence": [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. " ] } ], "annotation_id": [ "24c7023a5221b509d34dd6703d6e0607b2777e78" ], "worker_id": [ "e70d8110563d53282f1a26e823d27e6f235772db" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "17,833 sentences, 826,987 characters and 2,714 question-answer pairs" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20." ], "highlighted_evidence": [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs." 
] } ], "annotation_id": [ "d046d9ea83c5ffe607465e2fbc8817131c11e037" ], "worker_id": [ "e70d8110563d53282f1a26e823d27e6f235772db" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23." ], "highlighted_evidence": [ "Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large." ] } ], "annotation_id": [ "b3a3d6e707a67bab827053b40e446f30e416887f" ], "worker_id": [ "e70d8110563d53282f1a26e823d27e6f235772db" ] } ] }
1811.00942
Progress and Tradeoffs in Neural Language Models
In recent years, we have witnessed a dramatic shift towards techniques driven by neural networks for a variety of NLP tasks. Undoubtedly, neural language models (NLMs) have reduced perplexity by impressive amounts. This progress, however, comes at a substantial cost in performance, in terms of inference latency and energy consumption, which is particularly of concern in deployments on mobile devices. This paper, which examines the quality-performance tradeoff of various language modeling techniques, represents to our knowledge the first to make this observation. We compare state-of-the-art NLMs with "classic" Kneser-Ney (KN) LMs in terms of energy usage, latency, perplexity, and prediction accuracy using two standard benchmarks. On a Raspberry Pi, we find that orders of increase in latency and energy usage correspond to less change in perplexity, while the difference is much less pronounced on a desktop.
{ "section_name": [ "Introduction", "Background and Related Work", "Experimental Setup", "Hyperparameters and Training", "Infrastructure", "Results and Discussion", "Conclusion" ], "paragraphs": [ [ "Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 .", "Specifically focused on language modeling, this paper examines an issue that to our knowledge has not been explored: advances in neural language models have come at a significant cost in terms of increased computational complexity. Computing the probability of a token sequence using non-neural techniques requires a number of phrase lookups and perhaps a few arithmetic operations, whereas model inference with NLMs requires large matrix multiplications consuming perhaps millions of floating point operations (FLOPs). These performance tradeoffs are worth discussing.", "In truth, language models exist in a quality–performance tradeoff space. As model quality increases (e.g., lower perplexity), performance as measured in terms of energy consumption, query latency, etc. tends to decrease. For applications primarily running in the cloud—say, machine translation—practitioners often solely optimize for the lowest perplexity. This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment.", "There are, however, applications of NLMs that require less one-sided optimizations. On mobile devices such as smartphones and tablets, for example, NLMs may be integrated into software keyboards for next-word prediction, allowing much faster text entry. 
Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype. The greater computational costs of NLMs lead to higher energy usage in model inference, translating into shorter battery life.", "In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performance tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\\times$ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\\times$ longer and requires 32 $\\times$ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguably a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point." ], [ " BIBREF3 evaluate recent neural language models; however, their focus is not on the computational footprint of each model, but rather the perplexity. To further reduce perplexity, many neural language model extensions exist, such as continuous cache pointer BIBREF5 and mixture of softmaxes BIBREF6 . 
Since our focus is on comparing “core” neural and non-neural approaches, we disregard these extra optimization techniques in all of our models.", "Other work focuses on designing lightweight models for resource-efficient inference on mobile devices. BIBREF7 explore LSTMs BIBREF8 with binary weights for language modeling; BIBREF9 examine shallow feedforward neural networks for natural language processing.", "AWD-LSTM. BIBREF4 show that a simple three-layer LSTM, with proper regularization and optimization techniques, can achieve state of the art on various language modeling datasets, surpassing more complex models. Specifically, BIBREF4 apply randomized backpropagation through time, variational dropout, activation regularization, embedding dropout, and temporal activation regularization. A novel scheduler for optimization, non-monotonically triggered ASGD (NT-ASGD) is also introduced. BIBREF4 name their three-layer LSTM model trained with such tricks, “AWD-LSTM.”", "Quasi-Recurrent Neural Networks. Quasi-recurrent neural networks (QRNNs; BIBREF10 ) achieve current state of the art in word-level language modeling BIBREF11 . A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights, and a recurrent pooling layer. Given an input $\\mathbf {X} \\in \\mathbb {R}^{k \\times n}$ , the convolution layer is $\n\\mathbf {Z} = \\tanh (\\mathbf {W}_z \\cdot \\mathbf {X})\\\\\n\\mathbf {F} = \\sigma (\\mathbf {W}_f \\cdot \\mathbf {X})\\\\\n\\mathbf {O} = \\sigma (\\mathbf {W}_o \\cdot \\mathbf {X})\n$ ", "where $\\sigma$ denotes the sigmoid function, $\\cdot$ represents masked convolution across time, and $\\mathbf {W}_{\\lbrace z, f, o\\rbrace } \\in \\mathbb {R}^{m \\times k \\times r}$ are convolution weights with $k$ input channels, $m$ output channels, and a window size of $r$ . 
In the recurrent pooling layer, the convolution outputs are combined sequentially: $\n\\mathbf {c}_t &= \\mathbf {f}_t \\odot \\mathbf {c}_{t-1} + (1 -\n\\mathbf {f}_t) \\odot \\mathbf {z}_t\\\\\n\\mathbf {h}_t &= \\mathbf {o}_t \\odot \\mathbf {c}_t\n$ ", "Multiple QRNN layers can be stacked for deeper hierarchical representation, with the output $\\mathbf {h}_{1:t}$ being fed as the input into the subsequent layer: In language modeling, a four-layer QRNN is a standard architecture BIBREF11 .", "Perplexity–Recall Scale. Word-level perplexity does not have a strictly monotonic relationship with recall-at- $k$ , the fraction of top $k$ predictions that contain the correct word. A given R@ $k$ imposes a weak minimum perplexity constraint—there are many free parameters that allow for large variability in the perplexity given a certain R@ $k$ . Consider the corpus, “choo choo train,” with an associated unigram model $P(\\text{``choo''}) = 0.1$ , $P(\\text{``train''}) = 0.9$ , resulting in an R@1 of $1/3$ and perplexity of $4.8$ . Clearly, R@1 $=1/3$ for all $P(\\text{``choo''}) \\le 0.5$ ; thus, perplexity can drop as low as 2 without affecting recall." ], [ "We conducted our experiments on Penn Treebank (PTB; BIBREF12 ) and WikiText-103 (WT103; BIBREF13 ). Preprocessed by BIBREF14 , PTB contains 887K tokens for training, 70K for validation, and 78K for test, with a vocabulary size of 10,000. On the other hand, WT103 comprises 103 million tokens for training, 217K for validation, and 245K for test, spanning a vocabulary of 267K unique tokens.", "For the neural language model, we used a four-layer QRNN BIBREF10 , which achieves state-of-the-art results on a variety of datasets, such as WT103 BIBREF11 and PTB. To compare against more common LSTM architectures, we also evaluated AWD-LSTM BIBREF4 on PTB. For the non-neural approach, we used a standard five-gram model with modified Kneser-Ney smoothing BIBREF15 , as explored in BIBREF16 on PTB. 
We denote the QRNN models for PTB and WT103 as ptb-qrnn and wt103-qrnn, respectively.", "For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set." ], [ "The QRNN models followed the exact training procedure and architecture delineated in the official codebase from BIBREF11 . For ptb-qrnn, we trained the model for 550 epochs using NT-ASGD BIBREF4 , then finetuned for 300 epochs using ASGD BIBREF17 , all with a learning rate of 30 throughout. For wt103-qrnn, we followed BIBREF11 and trained the QRNN for 14 epochs, using the Adam optimizer with a learning rate of $10^{-3}$ . We also applied regularization techniques from BIBREF4 ; all the specific hyperparameters are the same as those in the repository. Our model architecture consists of 400-dimensional tied embedding weights BIBREF18 and four QRNN layers, with 1550 hidden units per layer on PTB and 2500 per layer on WT103. Both QRNN models have window sizes of $r=2$ for the first layer and $r=1$ for the rest.", "For the KN-5 model, we trained an off-the-shelf five-gram model using the popular SRILM toolkit BIBREF19 . We did not specify any special hyperparameters." ], [ "We trained the QRNNs with PyTorch (0.4.0; commit 1807bac) on a Titan V GPU. To evaluate the models under a resource-constrained environment, we deployed them on a Raspberry Pi 3 (Model B) running Raspbian Stretch (4.9.41-v7+). The Raspberry Pi (RPi) is not only a standard platform, but also a close surrogate to mobile phones, using the same Cortex-A7 in many phones. We then transferred the trained models to the RPi, using the same frameworks for evaluation. We plugged the RPi into a Watts Up Pro meter, a power meter that can be read programmatically over USB at a frequency of 1 Hz. 
For the QRNNs, we used the first 350 words of the test set, and averaged the ms/query and mJ/query. For KN-5, we used the entire test set for evaluation, since the latency was much lower. To adjust for the base power load, we subtracted idle power draw from energy usage.", "For a different perspective, we further evaluated all the models under a desktop environment, using an i7-4790k CPU and Titan V GPU. Because the base power load for powering a desktop is much higher than running neural language models, we collected only latency statistics. We used the entire test set, since the QRNN runs quickly.", "In addition to energy and latency, another consideration for the NLP developer selecting an operating point is the cost of underlying hardware. For our setup, the RPi costs $35 USD, the CPU costs $350 USD, and the GPU costs $3000 USD." ], [ "To demonstrate the effectiveness of the QRNN models, we present the results of past and current state-of-the-art neural language models in Table 1 ; we report the Skip- and AWD-LSTM results as seen in the original papers, while we report our QRNN results. Skip LSTM denotes the four-layer Skip LSTM in BIBREF3 . BIBREF20 focus on Hebbian softmax, a model extension technique—Rae-LSTM refers to their base LSTM model without any extensions. In our results, KN-5 refers to the traditional five-gram model with modified Kneser-Ney smoothing, and AWD is shorthand for AWD-LSTM.", "Perplexity–recall scale. In Figure 1 , using KN-5 as the model, we plot the log perplexity (cross entropy) and R@3 error ($1 - \\text{R@3}$) for every sentence in PTB and WT103. The horizontal clusters arise from multiple perplexity points representing the same R@3 value, as explained in Section \"Infrastructure\" . We also observe that the perplexity–recall scale is non-linear—instead, log perplexity appears to have a moderate linear relationship with R@3 error on PTB ($r=0.85$), and an even stronger relationship on WT103 ($r=0.94$). 
This is partially explained by WT103 having much longer sentences, and thus less noisy statistics.", "From Figure 1 , we find that QRNN models yield strongly linear log perplexity–recall plots as well, where $r=0.88$ and $r=0.93$ for PTB and WT103, respectively. Note that, due to the improved model quality over KN-5, the point clouds are shifted downward compared to Figure 1 . We conclude that log perplexity, or cross entropy, provides a more human-understandable indicator of R@3 than perplexity does. Overall, these findings agree with those from BIBREF21 , which explores the log perplexity–word error rate scale in language modeling for speech recognition.", "Quality–performance tradeoff. In Table 2 , from left to right, we report perplexity results on the validation and test sets, R@3 on test, and finally per-query latency and energy usage. On the RPi, KN-5 is both fast and power-efficient to run, using only about 7 ms/query and 6 mJ/query for PTB (Table 2 , row 1), and 264 ms/q and 229 mJ/q on WT103 (row 5). Taking 220 ms/query and consuming 300 mJ/query, AWD-LSTM and ptb-qrnn are still viable for mobile phones: The modern smartphone holds upwards of 10,000 joules BIBREF22 , and the latency is within usability standards BIBREF23 . Nevertheless, the models are still 49$\\times$ slower and 32$\\times$ more power-hungry than KN-5. The wt103-qrnn model is completely unusable on phones, taking over 1.2 seconds per next-word prediction. Neural models achieve perplexity drops of 60–80% and R@3 increases of 22–34%, but these improvements come at a much higher cost in latency and energy usage.", "In Table 2 (last two columns), the desktop yields very different results: the neural models on PTB (rows 2–3) are 9$\\times$ slower than KN-5, but the absolute latency is only 8 ms/q, which is still much faster than what humans perceive as instantaneous BIBREF23 . If a high-end commodity GPU is available, then the models are only twice as slow as KN-5 is. 
From row 5, even better results are noted with wt103-qrnn: On the CPU, the QRNN is only 60% slower than KN-5 is, while the model is faster by 11$\\times$ on a GPU. These results suggest that, if only latency is considered under a commodity desktop environment, the QRNN model is humanly indistinguishable from the KN-5 model, even without using GPU acceleration." ], [ "In the present work, we describe and examine the tradeoff space between quality and performance for the task of language modeling. Specifically, we explore the quality–performance tradeoffs between KN-5, a non-neural approach, and AWD-LSTM and QRNN, two neural language models. We find that with decreased perplexity comes vastly increased computational requirements: In one of the NLMs, a perplexity reduction by 2.5$\\times$ results in a 49$\\times$ rise in latency and 32$\\times$ increase in energy usage, when compared to KN-5." ] ] } { "question": [ "What aspects have been compared between various language models?", "what classic language models are mentioned in the paper?", "What is a commonly used evaluation metric for language models?" ], "question_id": [ "dd155f01f6f4a14f9d25afc97504aefdc6d29c13", "a9d530d68fb45b52d9bad9da2cd139db5a4b2f7c", "e07df8f613dbd567a35318cd6f6f4cb959f5c82d" ], "nlp_background": [ "two", "two", "two" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Quality measures using perplexity and recall, and performance measured using latency and energy usage. ", "evidence": [ "For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). 
To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set." ], "highlighted_evidence": [ "For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set." ] } ], "annotation_id": [ "c17796e0bd3bfcc64d5a8e844d23d8d39274af6b" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Kneser–Ney smoothing" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performance tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5$\times$ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49$\times$ longer and requires 32$\times$ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguably a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. 
Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point." ], "highlighted_evidence": [ "Kneser–Ney smoothing", "In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today." ] } ], "annotation_id": [ "715840b32a89c33e0a1de1ab913664eb9694bd34" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "perplexity" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ." ], "highlighted_evidence": [ "Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ." ] }, { "unanswerable": false, "extractive_spans": [ "perplexity" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . 
The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ." ], "highlighted_evidence": [ "recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ." ] } ], "annotation_id": [ "062dcccfdfb5af1c6ee886885703f9437d91a9dc", "1cc952fc047d0bb1a961c3ce65bada2e983150d1" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] } ] } 1805.02400 Stay On-Topic: Generating Context-specific Fake Restaurant Reviews Automatically generated fake restaurant reviews are a threat to online review systems. Recent research has shown that users have difficulties in detecting machine-generated fake reviews hiding among real restaurant reviews. The method used in this work (char-LSTM ) has one drawback: it has difficulties staying in context, i.e. when it generates a review for specific target entity, the resulting review may contain phrases that are unrelated to the target, thus increasing its detectability. In this work, we present and evaluate a more sophisticated technique based on neural machine translation (NMT) with which we can generate reviews that stay on-topic. We test multiple variants of our technique using native English speakers on Amazon Mechanical Turk. We demonstrate that reviews generated by the best variant have almost optimal undetectability (class-averaged F-score 47%). We conduct a user study with skeptical users and show that our method evades detection more frequently compared to the state-of-the-art (average evasion 3.2/4 vs 1.5/4) with statistical significance, at level {\alpha} = 1% (Section 4.3). We develop very effective detection tools and reach average F-score of 97% in classifying these. 
Although fake reviews are very effective in fooling people, effective automatic detection is still feasible. { "section_name": [ "Introduction", "Background", "System Model", "Attack Model", "Generative Model" ], "paragraphs": [ [ "Automatically generated fake reviews have only recently become natural enough to fool human readers. Yao et al. BIBREF0 use a deep neural network (a so-called 2-layer LSTM BIBREF1 ) to generate fake reviews, and concluded that these fake reviews look sufficiently genuine to fool native English speakers. They train their model using real restaurant reviews from yelp.com BIBREF2 . Once trained, the model is used to generate reviews character-by-character. Due to the generation methodology, it cannot be easily targeted for a specific context (meaningful side information). Consequently, the review generation process may stray off-topic. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of BIBREF0 apply a post-processing step (customization), which replaces food-related words with more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independently of their surrounding words, which may alert savvy readers. As an example: when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant, it changed the snippet 'garlic knots for breakfast' to 'garlic knots for sushi'.", "We propose a methodology based on neural machine translation (NMT) that improves the generation process by defining a context for each generated fake review. Our context is a clear-text sequence of: the review rating, restaurant name, city, state and food tags (e.g. Japanese, Italian). We show that our technique generates reviews that stay on topic. We can instantiate our basic technique into several variants. 
We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our generated fake reviews. For one variant, the participants' performance is close to random: the class-averaged F-score of detection is INLINEFORM0 (whereas random would be INLINEFORM1 given the 1:6 imbalance in the test). Via a user study with experienced, highly educated participants, we compare this variant (which we will henceforth refer to as NMT-Fake* reviews) with fake reviews generated using the char-LSTM-based technique from BIBREF0 .", "We demonstrate that NMT-Fake* reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 , BIBREF3 , BIBREF4 . Therefore, NMT-Fake* reviews may go undetected in existing online review sites. To meet this challenge, we develop a classifier that detects NMT-Fake* reviews effectively (97% F-score). Our main contributions are:" ], [ "Fake reviews User-generated content BIBREF5 is an integral part of the contemporary user experience on the web. Sites like tripadvisor.com, yelp.com and Google Play use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. User reviews and ratings may be used to rank services in recommendations. Ratings have an effect on a business's outward appearance. Already 8 years ago, researchers estimated that a one-star rating increase affects the business revenue by 5–9% on yelp.com BIBREF6 .", "Due to the monetary impact of user-generated content, some businesses have relied on so-called crowd-turfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange for monetary compensation. Crowd-turfing ethics are complicated. 
For example, Amazon community guidelines prohibit buying content relating to promotions, but the act of writing fabricated content is not considered illegal, nor is matching workers to customers BIBREF8 . In 2015, approximately 20% of online reviews on yelp.com were suspected of being fake BIBREF9 .", "Nowadays, user-generated review sites like yelp.com use filters and fraudulent review detection techniques. These factors have raised the quality requirements for crowd-turfed reviews provided to review sites, which in turn has led to an increase in the cost of high-quality reviews. Due to the cost increase, researchers hypothesize the existence of neural network-generated fake reviews. These neural-network-based fake reviews are statistically different from human-written fake reviews, and are not caught by classifiers trained on these BIBREF0 .", "Detecting fake reviews can either be done on an individual level or as a system-wide detection tool (i.e. regulation). Detecting fake online content on a personal level requires knowledge and skills in critical reading. In 2017, the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news BIBREF10 . For example, 20% of children aged 12–15 that use online news sites believe that all information on news sites is true.", "Neural Networks Neural networks are function compositions that map input data through INLINEFORM0 subsequent layers: DISPLAYFORM0 ", "where the functions INLINEFORM0 are typically non-linear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation. Language models (LMs) BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens ( INLINEFORM1 ): DISPLAYFORM0 ", "such that the language model can be used to predict how likely a specific token at time step INLINEFORM0 is, based on the INLINEFORM1 previous tokens. 
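The chain-rule definition of a language model above can be made concrete with a toy bigram LM. This sketch is our illustration only (add-one smoothing is our assumption, not the paper's setup); it scores a token sequence under the factorization just described:

```python
import math
from collections import Counter

def train_bigram(tokens, vocab):
    """Bigram LM with add-one smoothing; returns p(tok | prev)."""
    pairs = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens[:-1])
    V = len(vocab)
    def prob(prev, tok):
        return (pairs[(prev, tok)] + 1) / (unigrams[prev] + V)
    return prob

def sequence_logprob(prob, tokens):
    """log p(t_2 .. t_N | t_1) as a sum of next-token log-probs."""
    return sum(math.log(prob(p, t)) for p, t in zip(tokens, tokens[1:]))
```

Evaluating `prob(prev, tok)` is exactly the "how likely is a specific token given the previous tokens" prediction the text describes, with a one-token history.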
Tokens are typically either words or characters.", "For decades, deep neural networks were thought to be computationally too difficult to train. However, advances in optimization, hardware and the availability of frameworks have shown otherwise BIBREF1 , BIBREF12 . Neural language models (NLMs) have been one of the promising application areas. NLMs are typically various forms of recurrent neural networks (RNNs), which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector. There are many RNN architectures that focus on different ways of updating and maintaining context vectors: Long Short-Term Memory units (LSTM) and Gated Recurrent Units (GRUs) are perhaps most popular. Neural LMs have been used for free-form text generation. In certain application areas, the quality has been high enough to sometimes fool human readers BIBREF0 . Encoder-decoder (seq2seq) models BIBREF13 are architectures of stacked RNNs, which have the ability to generate output sequences based on input sequences. The encoder network reads in a sequence of tokens, and passes it to a decoder network (a LM). In contrast to simpler NLMs, encoder-decoder networks have the ability to use additional context for generating text, which enables more accurate generation of text. Encoder-decoder models are integral in Neural Machine Translation (NMT) BIBREF14 , where the task is to translate a source text from one language to another language. NMT models additionally use beam search strategies to heuristically search the set of possible translations. Training datasets are parallel corpora; large sets of paired sentences in the source and target languages. The application of NMT techniques for online machine translation has significantly improved the quality of translations, bringing it closer to human performance BIBREF15 .", "Neural machine translation models are efficient at mapping one expression to another (one-to-one mapping). 
Researchers have evaluated these models for conversation generation BIBREF16 , with mixed results. Some researchers attribute poor performance to the use of the negative log likelihood cost function during training, which emphasizes generation of high-confidence phrases rather than diverse phrases BIBREF17 . The results are often generic text, which lacks variation. Li et al. have suggested various augmentations to this, among others, suppressing typical responses in the decoder language model to promote response diversity BIBREF17 ." ], [ "We discuss the attack model, our generative machine learning method and controlling the generative process in this section." ], [ "Wang et al. BIBREF7 described a model of crowd-turfing attacks consisting of three entities: customers who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), agents who offer fake review services to customers, and workers who are orchestrated by the agent to compose and post fake reviews.", "Automated crowd-turfing attacks (ACA) replace workers by a generative model. This has several benefits, including better economy and scalability (human workers are more expensive and slower) and reduced detectability (the agent can better control the rate at which fake reviews are generated and posted).", "We assume that the agent has access to public reviews on the review platform, by which it can train its generative model. We also assume that it is easy for the agent to create a large number of accounts on the review platform so that account-based detection or rate-limiting techniques are ineffective against fake reviews.", "The quality of the generative model plays a crucial role in the attack. Yao et al. BIBREF0 propose the use of a character-based LSTM as the base for the generative model. LSTMs are not conditioned to generate reviews for a specific target BIBREF1 , and may mix up concepts from different contexts during free-form generation. 
Mixing contextually separate words is one of the key criteria that humans use to identify fake reviews. These may result in violations of known indicators for fake content BIBREF18 . For example, the review content may not match prior expectations nor the information need that the reader has. We improve the attack model by considering a more capable generative model that produces more appropriate reviews: a neural machine translation (NMT) model." ], [ "We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to learn how to associate context (keywords) to reviews, 2) fast training time, and 3) a high-degree of customization during production time, e.g. introduction of specific waiter or food items names into reviews.", "NMT models are constructions of stacked recurrent neural networks (RNNs). They include an encoder network and a decoder network, which are jointly optimized to produce a translation of one sequence to another. The encoder rolls over the input data in sequence and produces one INLINEFORM0 -dimensional context vector representation for the sentence. The decoder then generates output sequences based on the embedding vector and an attention module, which is taught to associate output words with certain input words. The generation typically continues until a specific EOS (end of sentence) token is encountered. The review length can be controlled in many ways, e.g. by setting the probability of generating the EOS token to zero until the required length is reached.", "NMT models often also include a beam search BIBREF14 , which generates several hypotheses and chooses the best ones amongst them. In our work, we use the greedy beam search technique. 
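Greedy decoding with the EOS-based length control described above can be sketched as follows. This is our illustration, not the openNMT-py API: `step` is a hypothetical callable returning the model's next-token distribution for the current prefix.

```python
def greedy_decode(step, min_len=5, max_len=50, eos="</s>"):
    """Greedy (beam width 1) decoding.

    `step(prefix)` returns a dict {token: prob} for the next token.
    EOS is suppressed (probability forced to zero) until `min_len`
    tokens have been generated -- one way to control review length.
    """
    out = []
    while len(out) < max_len:
        dist = dict(step(out))         # copy; don't mutate caller's dict
        if len(out) < min_len:
            dist[eos] = 0.0            # forbid early termination
        tok = max(dist, key=dist.get)  # greedy choice
        if tok == eos:
            break
        out.append(tok)
    return out
```

With beam width 1, the search simply commits to the highest-probability token at every step, which is what makes the naive output so repetitive.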
We forgo the use of additional beam searches as we found that the quality of the output was already adequate and the translation phase time consumption increases linearly for each beam used.", "We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1 –5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks are not yet reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context.", "The Yelp Challenge dataset includes metadata about restaurants, including their names, food tags, cities and states these restaurants are located in. For each restaurant review, we fetch this metadata and use it as our input context in the NMT model. The corresponding restaurant review is similarly set as the target sentence. This method produced 2.9 million pairs of sentences in our parallel corpus. We show one example of the parallel training corpus in Example 1 below:", "5 Public House Las Vegas NV Gastropubs Restaurants > Excellent", "food and service . Pricey , but well worth it . I would recommend", "the bone marrow and sampler platter for appetizers . 
\end{verbatim}", " ", " ", "\noindent The order {\textbf{[rating name city state tags]}} is kept constant.", "Training the model conditions it to associate certain sequences of words in the input sentence with others in the output.", " ", "\subsubsection{Training Settings}", " ", "We train our NMT model on a commodity PC with an i7-4790k CPU (4.00GHz), with 32GB RAM and one NVidia GeForce GTX 980 GPU. Our system can process approximately 1,300 \textendash 1,500 source tokens/s and approximately 5,730 \textendash 5,830 output tokens/s. Training one epoch takes on average 72 minutes. The model is trained for 8 epochs, i.e. overnight. We call fake reviews generated by this model \emph{NMT-Fake reviews}. We only need to train one model to produce reviews of different ratings.", "We use the following training settings: the Adam optimizer \cite{kingma2014adam} with the suggested learning rate 0.001 \cite{klein2017opennmt}. For most parts, parameters are at their default values. Notably, the maximum sentence length of input and output is 50 tokens by default.", "We leverage the framework openNMT-py \cite{klein2017opennmt} to train our NMT model.", "We list the openNMT-py commands used in Appendix Table~\ref{table:openNMT-py_commands}.", " ", "\begin{figure}[t]", "\begin{center}", " \begin{tabular}{ | l | }", " \hline", "Example 2. Greedy NMT \\", "Great food, \underline{great} service, \underline{great} \textit{\textit{beer selection}}. I had the \textit{Gastropubs burger} and it", "\\", "was delicious. The \underline{\textit{beer selection}} was also \underline{great}. \\", "\\", "Example 3. NMT-Fake* \\", "I love this restaurant. Great food, great service. It's \textit{a little pricy} but worth\\", "it for the \textit{quality} of the \textit{beer} and atmosphere you can see in \textit{Vegas}", "\\", " \hline", " \end{tabular}", " \label{table:output_comparison}", "\end{center}", "\caption{Na\"{i}ve text generation with NMT vs. 
generation using our NMT model. Repetitive patterns are \underline{underlined}. Contextual words are \emph{italicized}. Both examples here are generated based on the context given in Example~1.}", "\label{fig:comparison}", "\end{figure}", " ", "\subsection{Controlling generation of fake reviews}", "\label{sec:generating}", " ", "Greedy NMT beam searches are practical in many NMT cases. However, the results are simply repetitive when naively applied to fake review generation (see Example~2 in Figure~\ref{fig:comparison}).", "The NMT model produces many \emph{high-confidence} word predictions, which are repetitive and obviously fake. We calculated that, in fact, 43\% of the generated sentences started with the phrase ``Great food''. The lack of diversity in greedy use of NMTs for text generation is clear.", " ", " ", "\begin{algorithm}[!b]", " \KwData{Desired review context$C_\mathrm{input}$(given as cleartext), NMT model}", " \KwResult{Generated review$out$for input context$C_\mathrm{input}$}", "set$b=0.3$,$\lambda=-5$,$\alpha=\frac{2}{3}$,$p_\mathrm{typo}$,$p_\mathrm{spell}$\\", "$\log p \leftarrow \text{NMT.decode(NMT.encode(}C_\mathrm{input}\text{))}$\\", "out$\leftarrow$[~] \\", "$i \leftarrow 0$\\", "$\log p \leftarrow \text{Augment}(\log p$,$b$,$\lambda$,$1$,$[~]$, 0)~~~~~~~~~~~~~~~ |~random penalty~\\", "\While{$i=0$or$o_i$not EOS}{", "$\log \Tilde{p} \leftarrow \text{Augment}(\log p$,$b$,$\lambda$,$\alpha$,$o_i$,$i$)~~~~~~~~~~~ |~start \& memory penalty~\\", "$o_i \leftarrow$\text{NMT.beam}($\log \Tilde{p}$, out) \\", "out.append($o_i$) \\", "$i \leftarrow i+1$", "}\text{return}~$\text{Obfuscate}$(out,~$p_\mathrm{typo}$,~$p_\mathrm{spell}$)", "\caption{Generation of NMT-Fake* reviews.}", "\label{alg:base}", "\end{algorithm}", " ", "In this work, we describe how we succeeded in creating more diverse and less repetitive generated reviews, such as Example 3 in Figure~\ref{fig:comparison}.", "We 
outline pseudocode for our methodology of generating fake reviews in Algorithm~\ref{alg:base}. There are several parameters in our algorithm.", "The details of the algorithm are described below.", "We modify the openNMT-py translation phase by changing log-probabilities before passing them to the beam search.", "We notice that reviews generated with openNMT-py contain almost no language errors. As an optional post-processing step, we obfuscate reviews by randomly introducing natural typos/misspellings. In the next sections, we describe how we succeeded in generating more natural sentences from our NMT model, i.e. generating reviews like Example~3 instead of reviews like Example~2.", " ", "\subsubsection{Variation in word content}", " ", "Example 2 in Figure~\ref{fig:comparison} repeats commonly occurring words given for a specific context (e.g. \textit{great, food, service, beer, selection, burger} for Example~1). Generic review generation can be avoided by decreasing probabilities (log-likelihoods \cite{murphy2012machine}) of the generator's LM, the decoder.", "We constrain the generation of sentences by randomly \emph{imposing penalties on words}.", "We tried several forms of added randomness, and found that adding constant penalties to a \emph{random subset} of the target words resulted in the most natural sentence flow. We call these penalties \emph{Bernoulli penalties}, since the random variables are chosen as either 1 or 0 (on or off).", " ", " ", "\paragraph{Bernoulli penalties to language model}", "To avoid generic sentence components, we augment the default language model$p(\cdot)$of the decoder by", " ", "\$$", "\log \Tilde{p}(t_k) = \log p(t_k | t_{k-1}, \dots, t_1) + \lambda q,", "\$$", " ", "where$q \in R^{V}$is a vector of Bernoulli-distributed random values that obtain value$1$with probability$b$and value$0$with probability$1-b$, and$\lambda < 0$. 
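The augmentation above takes only a few lines to express; this is our sketch of the formula, not the authors' implementation:

```python
import random

def bernoulli_penalty(logp, b=0.3, lam=-5.0, rng=random):
    """Augment a log-probability vector with Bernoulli penalties.

    Each vocabulary entry is independently penalized by `lam` (< 0)
    with probability `b`, softly "forgetting" part of the vocabulary.
    """
    q = [1 if rng.random() < b else 0 for _ in logp]   # q ~ Bernoulli(b)
    return [lp + lam * qi for lp, qi in zip(logp, q)]
```

Because the penalty is additive in log-space, penalized words are not forbidden outright; they merely become much less likely to win the beam search.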
Parameter$b$controls how much of the vocabulary is forgotten and$\lambda$is a soft penalty for including ``forgotten'' words in a review.", "$\lambda q_k$emphasizes sentence forming with non-penalized words. The randomness is reset at the start of generating a new review.", "Using Bernoulli penalties in the language model, we can ``forget'' a certain proportion of words and essentially ``force'' the creation of less typical sentences. We will test the effect of these two parameters, the Bernoulli probability$b$and the log-likelihood penalty of including ``forgotten'' words$\lambda$, with a user study in Section~\ref{sec:varying}.", " ", "\paragraph{Start penalty}", "We introduce start penalties to avoid generic sentence starts (e.g. ``Great food, great service''). Inspired by \cite{li2016diversity}, we add a random start penalty$\lambda \alpha^{i}$to our language model, which decreases monotonically for each generated token. We set$\alpha \leftarrow 0.66$as its effect decreases by 90\% every 5 words generated.", " ", "\paragraph{Penalty for reusing words}", "Bernoulli penalties do not prevent excessive use of certain words in a sentence (such as \textit{great} in Example~2).", "To avoid excessive reuse of words, we included a memory penalty for previously used words in each translation.", "Concretely, we add the penalty$\lambda$to each word that has been generated by the greedy search.", " ", "\subsubsection{Improving sentence coherence}", "\label{sec:grammar}", "We visually analyzed reviews after applying these penalties to our NMT model. While the reviews were clearly diverse, they were \emph{incoherent}: the introduction of random penalties had degraded the grammaticality of the sentences. Among other problems, the use of punctuation was erratic, and pronouns were used semantically incorrectly (e.g. \emph{he} and \emph{she} might be swapped, as could ``and''/``but''). 
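The start and memory penalties above can be combined into one augmentation step over the log-probability vector. A sketch (ours, simplified from the description in the text; `last_index` is the vocabulary index of the previously emitted token):

```python
import random

def augment(logp, b, lam, alpha, last_index, i, rng=random):
    """Start and memory penalties on a log-prob vector (sketch).

    A Bernoulli(b)-selected subset of entries receives the start
    penalty lam * alpha**i, which decays as generation proceeds;
    the previously emitted token always receives the full memory
    penalty lam, discouraging immediate reuse.
    """
    out = list(logp)
    for k in range(len(out)):
        if rng.random() < b:                # Bernoulli-selected index
            out[k] += lam * alpha ** i      # decaying start penalty
    if last_index is not None:
        out[last_index] += lam              # memory penalty
    return out
```

With `alpha = 0.66`, the start penalty at position `i = 5` has already shrunk to about an eighth of its initial value, so randomness mostly shapes the opening of the review.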
To improve the authenticity of our reviews, we added several \\emph{grammar-based rules}.", " ", "English language has several classes of words which are important for the natural flow of sentences.", "We built a list of common pronouns (e.g. I, them, our), conjunctions (e.g. and, thus, if), punctuation (e.g. ,/.,..), and apply only half memory penalties for these words. We found that this change made the reviews more coherent. The pseudocode for this and the previous step is shown in Algorithm~\\ref{alg:aug}.", "The combined effect of grammar-based rules and LM augmentation is visible in Example~3, Figure~\\ref{fig:comparison}.", " ", "\\begin{algorithm}[!t]", " \\KwData{Initial log LM$\\log p$, Bernoulli probability$b$, soft-penalty$\\lambda$, monotonic factor$\\alpha$, last generated token$o_i$, grammar rules set$G$}", " \\KwResult{Augmented log LM$\\log \\Tilde{p}$}", "\\begin{algorithmic}[1]", "\\Procedure {Augment}{$\\log p$,$b$,$\\lambda$,$\\alpha$,$o_i$,$i$}{ \\\\", "generate$P_{\\mathrm{1:N}} \\leftarrow Bernoulli(b)$~~~~~~~~~~~~~~~|~$\\text{One value} \\in \\{0,1\\}~\\text{per token}$~ \\\\", "$I \\leftarrow P>0$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Select positive indices~\\\\", "$\\log \\Tilde{p} \\leftarrow\\text{Discount}$($\\log p$,$I$,$\\lambda \\cdot \\alpha^i$,$G$) ~~~~~~ |~start penalty~\\\\", "$\\log \\Tilde{p} \\leftarrow\\text{Discount}$($\\log \\Tilde{p}$,$[o_i]$,$\\lambda$,$G$) ~~~~~~~~~ |~memory penalty~\\\\", "\\textbf{return}~$\\log \\Tilde{p}$", "}", "\\EndProcedure", "\\\\", "\\Procedure {Discount}{$\\log p$,$I$,$\\lambda$,$G$}{", "\\State{\\For{$i \\in I$}{", "\\eIf{$o_i \\in G$}{", "$\\log p_{i} \\leftarrow \\log p_{i} + \\lambda/2$", "}{", "$\\log p_{i} \\leftarrow \\log p_{i} + \\lambda$}", "}\\textbf{return}~$\\log p$", "\\EndProcedure", "}}", "\\end{algorithmic}", "\\caption{Pseudocode for augmenting language model. 
}", "\\label{alg:aug}", "\\end{algorithm}", " ", "\\subsubsection{Human-like errors}", "\\label{sec:obfuscation}", "We notice that our NMT model produces reviews without grammar mistakes.", "This is unlike real human writers, whose sentences contain two types of language mistakes 1) \\emph{typos} that are caused by mistakes in the human motoric input, and 2) \\emph{common spelling mistakes}.", "We scraped a list of common English language spelling mistakes from Oxford dictionary\\footnote{\\url{https://en.oxforddictionaries.com/spelling/common-misspellings}} and created 80 rules for randomly \\emph{re-introducing spelling mistakes}.", "Similarly, typos are randomly reintroduced based on the weighted edit distance\\footnote{\\url{https://pypi.python.org/pypi/weighted-levenshtein/0.1}}, such that typos resulting in real English words with small perturbations are emphasized.", "We use autocorrection tools\\footnote{\\url{https://pypi.python.org/pypi/autocorrect/0.1.0}} for finding these words.", "We call these augmentations \\emph{obfuscations}, since they aim to confound the reader to think a human has written them. We omit the pseudocode description for brevity.", " ", "\\subsection{Experiment: Varying generation parameters in our NMT model}", "\\label{sec:varying}", " ", "Parameters$b$and$\\lambda$control different aspects in fake reviews.", "We show six different examples of generated fake reviews in Table~\\ref{table:categories}.", "Here, the largest differences occur with increasing values of$b$: visibly, the restaurant reviews become more extreme.", "This occurs because a large portion of vocabulary is forgotten''. Reviews with$b \\geq 0.7$contain more rare word combinations, e.g. !!!!!'' as punctuation, and they occasionally break grammaticality (''experience was awesome'').", "Reviews with lower$b$are more generic: they contain safe word combinations like Great place, good service'' that occur in many reviews. 
Parameter$\\lambda$'s is more subtle: it affects how random review starts are and to a degree, the discontinuation between statements within the review.", "We conducted an Amazon Mechanical Turk (MTurk) survey in order to determine what kind of NMT-Fake reviews are convincing to native English speakers. We describe the survey and results in the next section.", " ", " ", "\\begin{table}[!b]", "\\caption{Six different parametrizations of our NMT reviews and one example for each. The context is 5 P~.~F~.~Chang ' s Scottsdale AZ'' in all examples.}", "\\begin{center}", " \\begin{tabular}{ | l | l | }", " \\hline", "$(b, \\lambda)$& Example review for context \\\\ \\hline", " \\hline", "$(0.3, -3)$& I love this location! Great service, great food and the best drinks in Scottsdale. \\\\", " & The staff is very friendly and always remembers u when we come in\\\\\\hline", "$(0.3, -5)$& Love love the food here! I always go for lunch. They have a great menu and \\\\", " & they make it fresh to order. Great place, good service and nice staff\\\\\\hline", "$(0.5, -4)$& I love their chicken lettuce wraps and fried rice!! The service is good, they are\\\\", " & always so polite. They have great happy hour specials and they have a lot\\\\", " & of options.\\\\\\hline", "$(0.7, -3)$& Great place to go with friends! They always make sure your dining \\\\", " & experience was awesome.\\\\ \\hline", "$(0.7, -5)$& Still haven't ordered an entree before but today we tried them once..\\\\", " & both of us love this restaurant....\\\\\\hline", "$(0.9, -4)$& AMAZING!!!!! Food was awesome with excellent service. Loved the lettuce \\\\", " & wraps. Great drinks and wine! 
Can't wait to go back so soon!!\\\\ \\hline", " \\end{tabular}", " \\label{table:categories}", "\\end{center}", "\\end{table}", " ", "\\subsubsection{MTurk study}", "\\label{sec:amt}", "We created 20 jobs, each with 100 questions, and requested master workers in MTurk to complete the jobs.", "We randomly generated each survey for the participants. Each review had a 50\\% chance of being real or fake. The fake ones were further drawn from six (6) categories of fake reviews (Table~\\ref{table:categories}).", "The restaurant and the city were given as contextual information to the participants. Our aim was to use this survey to understand how well English speakers react to different parametrizations of NMT-Fake reviews.", "Table~\\ref{table:amt_pop} in the Appendix summarizes the statistics for respondents in the survey. All participants were native English speakers from America. The base rate (50\\%) was revealed to the participants prior to the study.", " ", "We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had great difficulty detecting our fake reviews. On average, the reviews were detected with a class-averaged \\emph{F-score of only 56\\%}, with 53\\% F-score for fake review detection and 59\\% F-score for real review detection. The results are very close to \\emph{random detection}, where precision, recall and F-score would each be 50\\%. Results are recorded in Table~\\ref{table:MTurk_super}. 
Overall, the fake review generation is very successful, since human detection rate across categories is close to random.", " ", "\\begin{table}[t]", "\\caption{Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).}", "\\begin{center}", " \\begin{tabular}{ | c | c |c |c | c | }", " \\hline", " \\multicolumn{5}{|c|}{Classification report}", " \\\\ \\hline", " Review Type & Precision & Recall & F-score & Support \\\\ \\hline", " \\hline", " Human & 55\\% & 63\\% & 59\\% & 994\\\\", " NMT-Fake & 57\\% & 50\\% & 53\\% & 1006 \\\\", " \\hline", " \\end{tabular}", " \\label{table:MTurk_super}", "\\end{center}", "\\end{table}", " ", "We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had most difficulties recognizing reviews of category$(b=0.3, \\lambda=-5)$, where true positive rate was$40.4\\%$, while the true negative rate of the real class was$62.7\\%$. The precision were$16\\%$and$86\\%$, respectively. The class-averaged F-score is$47.6\\%$, which is close to random. Detailed classification reports are shown in Table~\\ref{table:MTurk_sub} in Appendix. Our MTurk-study shows that \\emph{our NMT-Fake reviews pose a significant threat to review systems}, since \\emph{ordinary native English-speakers have very big difficulties in separating real reviews from fake reviews}. We use the review category$(b=0.3, \\lambda=-5)$for future user tests in this paper, since MTurk participants had most difficulties detecting these reviews. We refer to this category as NMT-Fake* in this paper.", " ", "\\section{Evaluation}", "\\graphicspath{ {figures/}}", " ", "We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews, and proceed with a user study with experienced participants. 
We demonstrate the statistical difference to existing fake review types \\cite{yao2017automated,mukherjee2013yelp,rayana2015collective} by training classifiers to detect previous types and investigate classification performance.", " ", "\\subsection{Replication of state-of-the-art model: LSTM}", "\\label{sec:repl}", " ", "Yao et al. \\cite{yao2017automated} presented the current state-of-the-art generative model for fake reviews. The model is trained over the Yelp Challenge dataset using a two-layer character-based LSTM model.", "We requested the authors of \\cite{yao2017automated} for access to their LSTM model or a fake review dataset generated by their model. Unfortunately they were not able to share either of these with us. We therefore replicated their model as closely as we could, based on their paper and e-mail correspondence\\footnote{We are committed to sharing our code with bonafide researchers for the sake of reproducibility.}.", " ", "We used the same graphics card (GeForce GTX) and trained using the same framework (torch-RNN in lua). We downloaded the reviews from Yelp Challenge and preprocessed the data to only contain printable ASCII characters, and filtered out non-restaurant reviews. We trained the model for approximately 72 hours. We post-processed the reviews using the customization methodology described in \\cite{yao2017automated} and email correspondence. We call fake reviews generated by this model LSTM-Fake reviews.", " ", "\\subsection{Similarity to existing fake reviews}", "\\label{sec:automated}", " ", "We now want to understand how NMT-Fake* reviews compare to a) LSTM fake reviews and b) human-generated fake reviews. We do this by comparing the statistical similarity between these classes.", " ", "For a' (Figure~\\ref{fig:lstm}), we use the Yelp Challenge dataset. We trained a classifier using 5,000 random reviews from the Yelp Challenge dataset (human'') and 5,000 fake reviews generated by LSTM-Fake. Yao et al. 
\\cite{yao2017automated} found that character features are essential in identifying LSTM-Fake reviews. Consequently, we use character features (n-grams up to 3).", " ", "For `b' (Figure~\\ref{fig:shill}), we use the Yelp ``Shills'' dataset (combination of YelpZip \\cite{mukherjee2013yelp}, YelpNYC \\cite{mukherjee2013yelp}, YelpChi \\cite{rayana2015collective}). This dataset labels entries that are identified as fraudulent by Yelp's filtering mechanism (``shill reviews'')\\footnote{Note that shill reviews are probably generated by human shills \\cite{zhao2017news}.}. The rest are treated as genuine reviews from human users (``genuine''). We use 100,000 reviews from each category to train a classifier. We generate features using the commercial psychometric tool LIWC2015 \\cite{pennebaker2015development}.", " ", "In both cases, we use AdaBoost (with 200 shallow decision trees) for training. For testing each classifier, we use a held-out test set of 1,000 reviews from both classes in each case. In addition, we test 1,000 NMT-Fake* reviews. Figures~\\ref{fig:lstm} and~\\ref{fig:shill} show the results. The classification threshold of 50\\% is marked with a dashed line.", " ", "\\begin{figure}", " \\begin{subfigure}[b]{0.5\\columnwidth}", " \\includegraphics[width=\\columnwidth]{figures/lstm.png}", " \\caption{Human--LSTM reviews.}", " \\label{fig:lstm}", " \\end{subfigure}", " \\begin{subfigure}[b]{0.5\\columnwidth}", " \\includegraphics[width=\\columnwidth]{figures/distribution_shill.png}", " \\caption{Genuine--Shill reviews.}", " \\label{fig:shill}", " \\end{subfigure}", " \\caption{", " Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (\\emph{genuine} and \\emph{shill}) reviews. Figure~\\ref{fig:lstm} shows that a classifier trained to distinguish ``human'' vs. LSTM-Fake reviews cannot distinguish ``human'' vs. NMT-Fake* reviews. 
Figure~\\ref{fig:shill} shows NMT-Fake* reviews are more similar to \\emph{genuine} reviews than \\emph{shill} reviews.", " }", " \\label{fig:statistical_similarity}", "\\end{figure}", " ", "We can see that our newly generated reviews do not share strong attributes with previously known categories of fake reviews. If anything, our fake reviews are more similar to genuine reviews than previous fake reviews are. We thus conjecture that our NMT-Fake* reviews present a category of fake reviews that may go undetected on online review sites.", " ", " ", "\\subsection{Comparative user study}", "\\label{sec:comparison}", "We wanted to evaluate the effectiveness of fake reviews against tech-savvy users who understand, and know to expect, machine-generated fake reviews. We conducted a user study with 20 participants, all with a computer science education and at least one university degree. Participant demographics are shown in Table~\\ref{table:amt_pop} in the Appendix. Each participant first attended a training session where they were asked to label reviews (fake and genuine) and could later compare them to the correct answers -- we call these participants \\emph{experienced participants}.", "No personal data was collected during the user study.", " ", "Each person was given two randomly selected sets of 30 reviews (a total of 60 reviews per person), with reviews containing 10 \\textendash 50 words each.", "Each set contained 26 (87\\%) real reviews from Yelp and 4 (13\\%) machine-generated reviews,", "numbers chosen based on suspicious review prevalence on Yelp~\\cite{mukherjee2013yelp,rayana2015collective}.", "One set contained machine-generated reviews from one of the two models (NMT ($b=0.3, \\lambda=-5$) or LSTM),", "and the other set contained reviews from the other model, in randomized order. The number of fake reviews was revealed to each participant in the study description. Each participant was requested to mark four (4) reviews as fake.", " ", "Each review targeted a real restaurant. 
A screenshot of that restaurant's Yelp page was shown to each participant prior to the study. Each participant evaluated reviews for one specific, randomly selected, restaurant. An example of the first page of the user study is shown in Figure~\\ref{fig:screenshot} in the Appendix.", " ", "\\begin{figure}[!ht]", "\\centering", "\\includegraphics[width=.7\\columnwidth]{detection2.png}", "\\caption{Violin plots of detection rate in the comparative study. Means and standard deviations for the number of detected fakes are $0.8\\pm0.7$ for NMT-Fake* and $2.5\\pm1.0$ for LSTM-Fake. $n=20$. A sample of random detection is shown for comparison.}", "\\label{fig:aalto}", "\\end{figure}", " ", " ", "Figure~\\ref{fig:aalto} shows the distribution of detected reviews of both types. A hypothetical random detector is shown for comparison.", "NMT-Fake* reviews are significantly more difficult for our experienced participants to detect. On average, the detection rate (recall) is $20\\%$ for NMT-Fake* reviews, compared to $61\\%$ for LSTM-based reviews.", "The precision (and F-score) is the same as the recall in our study, since participants labeled 4 fakes in each set of 30 reviews \\cite{murphy2012machine}.", "The distribution of the detection across participants is shown in Figure~\\ref{fig:aalto}. \\emph{The difference is statistically significant at the $99\\%$ confidence level} (Welch's t-test).", "We compared the detection rate of NMT-Fake* reviews to a random detector, and find that \\emph{our participants' detection rate of NMT-Fake* reviews is not statistically different from random predictions at the 95\\% confidence level} (Welch's t-test).", " ", " ", "\\section{Defenses}", " ", "\\label{sec:detection}", " ", "We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). 
The features we used are recorded in Table~\\ref{table:features_adaboost} (Appendix).", "We used word-level features based on spaCy-tokenization \\cite{honnibal-johnson:2015:EMNLP} and constructed n-gram representations of POS tags and dependency tree tags. We added readability features from NLTK~\\cite{bird2004nltk}.", " ", "\\begin{figure}[ht]", "\\centering", "\\includegraphics[width=.7\\columnwidth]{obf_score_fair_2.png}", "\\caption{", "AdaBoost-based classification of NMT-Fake and human-written reviews.", "Effect of varying $b$ and $\\lambda$ in fake review generation.", "The variant that native speakers had the most difficulty detecting is reliably detected by AdaBoost (97\\%).}", "\\label{fig:adaboost_matrix_b_lambda}", "\\end{figure}", " ", " ", "Figure~\\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kind of fake reviews. The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \\lambda=-5$) are detected with an excellent 97\\% F-score.", "The most important features for the classification were counts of frequently occurring words in fake reviews (such as punctuation, pronouns, articles) as well as the readability feature ``Automated Readability Index''. We thus conclude that while NMT-Fake reviews are difficult for humans to detect, they can be detected well with the right tools.", " ", "\\section{Related Work}", " ", "Kumar and Shah~\\cite{kumar2018false} survey and categorize false information research. Automatically generated fake reviews are a form of \\emph{opinion-based false information}, where the creator of the review may influence readers' opinions or decisions.", "Yao et al. \\cite{yao2017automated} presented their study on machine-generated fake reviews. In contrast to us, they investigated character-level language models, without specifying a specific context before generation. 
We leverage existing NMT tools to encode a specific context for the restaurant before generating reviews.", "Supporting our study, Everett et al.~\\cite{Everett2016Automated} found that security researchers were less likely to be fooled by Markov chain-generated Reddit comments compared to ordinary Internet users.", " ", "Diversification of NMT model outputs has been studied in \\cite{li2016diversity}. The authors proposed the use of a penalty on commonly occurring sentences (\\emph{n-grams}) in order to emphasize maximum mutual information-based generation.", "The authors investigated the use of NMT models in chatbot systems.", "We found that unigram penalties on random tokens (Algorithm~\\ref{alg:aug}) were easy to implement and produced sufficiently diverse responses.", " ", "\\section{Discussion and Future Work}", " ", "\\paragraph{What makes NMT-Fake* reviews difficult to detect?} First, NMT models allow the encoding of a relevant context for each review, which narrows down the words the model can choose from. Our NMT model had a perplexity of approximately $25$, while the model of \\cite{yao2017automated} had a perplexity of approximately $90$\\footnote{Personal communication with the authors}. Second, the beam search in NMT models narrows down choices to natural-looking sentences. Third, we observed that the NMT model produced \\emph{better structure} in the generated sentences (i.e. a more coherent story).", " ", "\\paragraph{Cost of generating reviews} With our setup, generating one review took less than one second. The cost of generation stems mainly from the overnight training. Assuming an electricity cost of 16 cents / kWh (California) and 8 hours of training, training the NMT model costs approximately 1.30 USD. This is a 90\\% reduction in time compared to the state-of-the-art \\cite{yao2017automated}. 
Furthermore, it is possible to generate both positive and negative reviews with the same model.", " ", "\\paragraph{Ease of customization} We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search. We noticed that the success depended on the prevalence of the word in the training set. For example, adding a +5 to \\emph{Mike} in the log-likelihood resulted in approximately 10\\% prevalence of this word in the reviews. An attacker can therefore easily insert specific keywords into reviews, which can increase evasion probability.", " ", "\\paragraph{Ease of testing} Our diversification scheme is applicable during the \\emph{generation phase}, and does not affect the training setup of the network in any way. Once the NMT model is obtained, it is easy to obtain several different variants of NMT-Fake reviews by varying parameters $b$ and $\\lambda$.", " ", " ", " ", "\\paragraph{Languages} The generation methodology is not per se language-dependent. The requirement for successful generation is that sufficient data exists in the target language. However, our language model modifications require some knowledge of the target language's grammar to produce high-quality reviews.", " ", "\\paragraph{Generalizability of detection techniques} Currently, fake reviews are not universally detectable. Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews (Section~\\ref{sec:automated}). We see this as an open problem that deserves more attention in fake review research.", " ", "\\paragraph{Generalizability to other types of datasets} Our technique can be applied to any dataset, as long as there is sufficient training data for the NMT model. 
We used approximately 2.9 million reviews for this work.", " ", "\\section{Conclusion}", " ", "In this paper, we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced, tech-savvy users.", "This supports anecdotal evidence \\cite{national2017commission}.", "Our technique is more effective than the state of the art \\cite{yao2017automated}.", "We conclude that machine-aided fake review detection is necessary, since human users are ineffective at identifying fake reviews.", "We also showed that detectors trained on one type of fake review are not effective in identifying other types of fake reviews.", "Robust detection of fake reviews is thus still an open problem.", " ", " ", "\\section*{Acknowledgments}", "We thank Tommi Gr\\\"{o}ndahl for assistance in planning user studies and the", "participants of the user study for their time and feedback. We also thank", "Luiza Sayfullina for comments that improved the manuscript.", "We thank the authors of \\cite{yao2017automated} for answering questions about", "their work.", " ", " ", "\\bibliographystyle{splncs}", "\\begin{thebibliography}{10}", " ", "\\bibitem{yao2017automated}", "Yao, Y., Viswanath, B., Cryan, J., Zheng, H., Zhao, B.Y.:", "\\newblock Automated crowdturfing attacks and defenses in online review systems.", "\\newblock In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and", " Communications Security, ACM (2017)", " ", "\\bibitem{murphy2012machine}", "Murphy, K.:", "\\newblock Machine learning: a probabilistic perspective.", "\\newblock MIT Press (2012)", " ", "\\bibitem{challenge2013yelp}", "Yelp:", "\\newblock {Yelp Challenge Dataset} (2013)", " ", "\\bibitem{mukherjee2013yelp}", "Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.:", "\\newblock What Yelp fake review filter might be doing?", "\\newblock In: Seventh International AAAI Conference on Weblogs and Social Media", " (ICWSM). 
(2013)", " ", "\\bibitem{rayana2015collective}", "Rayana, S., Akoglu, L.:", "\\newblock Collective opinion spam detection: Bridging review networks and", " metadata.", "\\newblock In: Proceedings of the 21st ACM SIGKDD International Conference on", " Knowledge Discovery and Data Mining (2015)", " ", "\\bibitem{o2008user}", "{O'Connor}, P.:", "\\newblock {User-generated content and travel: A case study on Tripadvisor.com}.", "\\newblock Information and communication technologies in tourism 2008 (2008)", " ", "\\bibitem{luca2010reviews}", "Luca, M.:", "\\newblock {Reviews, Reputation, and Revenue: The Case of Yelp.com}.", "\\newblock {Harvard Business School} (2010)", " ", "\\bibitem{wang2012serf}", "Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., Zhao, B.Y.:", "\\newblock Serf and turf: crowdturfing for fun and profit.", "\\newblock In: Proceedings of the 21st international conference on World Wide", " Web (WWW), ACM (2012)", " ", "\\bibitem{rinta2017understanding}", "Rinta-Kahila, T., Soliman, W.:", "\\newblock Understanding crowdturfing: The different ethical logics behind the", " clandestine industry of deception.", "\\newblock In: ECIS 2017: Proceedings of the 25th European Conference on", " Information Systems. (2017)", " ", "\\bibitem{luca2016fake}", "Luca, M., Zervas, G.:", "\\newblock Fake it till you make it: Reputation, competition, and Yelp review", " fraud.", "\\newblock Management Science (2016)", " ", "\\bibitem{national2017commission}", "{National Literacy Trust}:", "\\newblock Commission on fake news and the teaching of critical literacy skills", " in schools URL:", " \\url{https://literacytrust.org.uk/policy-and-campaigns/all-party-parliamentary-group-literacy/fakenews/}.", " ", "\\bibitem{jurafsky2014speech}", "Jurafsky, D., Martin, J.H.:", "\\newblock Speech and language processing. 
Volume~3.", "\\newblock Pearson London: (2014)", " ", "\\bibitem{kingma2014adam}", "Kingma, D.P., Ba, J.:", "\\newblock Adam: A method for stochastic optimization.", "\\newblock arXiv preprint arXiv:1412.6980 (2014)", " ", "\\bibitem{cho2014learning}", "Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F.,", " Schwenk, H., Bengio, Y.:", "\\newblock Learning phrase representations using rnn encoder--decoder for", " statistical machine translation.", "\\newblock In: Proceedings of the 2014 Conference on Empirical Methods in", " Natural Language Processing (EMNLP). (2014)", " ", "\\bibitem{klein2017opennmt}", "Klein, G., Kim, Y., Deng, Y., Senellart, J., Rush, A.:", "\\newblock Opennmt: Open-source toolkit for neural machine translation.", "\\newblock Proceedings of ACL, System Demonstrations (2017)", " ", "\\bibitem{wu2016google}", "Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun,", " M., Cao, Y., Gao, Q., Macherey, K., et~al.:", "\\newblock Google's neural machine translation system: Bridging the gap between", " human and machine translation.", "\\newblock arXiv preprint arXiv:1609.08144 (2016)", " ", "\\bibitem{mei2017coherent}", "Mei, H., Bansal, M., Walter, M.R.:", "\\newblock Coherent dialogue with attention-based language models.", "\\newblock In: AAAI. (2017) 3252--3258", " ", "\\bibitem{li2016diversity}", "Li, J., Galley, M., Brockett, C., Gao, J., Dolan, B.:", "\\newblock A diversity-promoting objective function for neural conversation", " models.", "\\newblock In: Proceedings of NAACL-HLT. (2016)", " ", "\\bibitem{rubin2006assessing}", "Rubin, V.L., Liddy, E.D.:", "\\newblock Assessing credibility of weblogs.", "\\newblock In: AAAI Spring Symposium: Computational Approaches to Analyzing", " Weblogs. 
(2006)", " ", "\\bibitem{zhao2017news}", "news.com.au:", "\\newblock {The potential of AI generated 'crowdturfing' could undermine online", " reviews and dramatically erode public trust} URL:", " \\url{http://www.news.com.au/technology/online/security/the-potential-of-ai-generated-crowdturfing-could-undermine-online-reviews-and-dramatically-erode-public-trust/news-story/e1c84ad909b586f8a08238d5f80b6982}.", " ", "\\bibitem{pennebaker2015development}", "Pennebaker, J.W., Boyd, R.L., Jordan, K., Blackburn, K.:", "\\newblock {The development and psychometric properties of LIWC2015}.", "\\newblock Technical report (2015)", " ", "\\bibitem{honnibal-johnson:2015:EMNLP}", "Honnibal, M., Johnson, M.:", "\\newblock An improved non-monotonic transition system for dependency parsing.", "\\newblock In: Proceedings of the 2015 Conference on Empirical Methods in", " Natural Language Processing (EMNLP), ACM (2015)", " ", "\\bibitem{bird2004nltk}", "Bird, S., Loper, E.:", "\\newblock {NLTK: the natural language toolkit}.", "\\newblock In: Proceedings of the ACL 2004 on Interactive poster and", " demonstration sessions, Association for Computational Linguistics (2004)", " ", "\\bibitem{kumar2018false}", "Kumar, S., Shah, N.:", "\\newblock False information on web and social media: A survey.", "\\newblock arXiv preprint arXiv:1804.08559 (2018)", " ", "\\bibitem{Everett2016Automated}", "Everett, R.M., Nurse, J.R.C., Erola, A.:", "\\newblock The anatomy of online deception: What makes automated text", " convincing?", "\\newblock In: Proceedings of the 31st Annual ACM Symposium on Applied", " Computing. 
SAC '16, ACM (2016)", " ", "\\end{thebibliography}", " ", " ", " ", "\\section*{Appendix}", " ", "We present basic demographics of our MTurk study and the comparative study with experienced users in Table~\\ref{table:amt_pop}.", " ", "\\begin{table}", "\\caption{User study statistics.}", "\\begin{center}", " \\begin{tabular}{ | l | c | c | }", " \\hline", " Quality & Mechanical Turk users & Experienced users\\\\", " \\hline", " Native English Speaker & Yes (20) & Yes (1) No (19) \\\\", " Fluent in English & Yes (20) & Yes (20) \\\\", " Age & 21-40 (17) 41-60 (3) & 21-25 (8) 26-30 (7) 31-35 (4) 41-45 (1)\\\\", " Gender & Male (14) Female (6) & Male (17) Female (3)\\\\", " Highest Education & High School (10) Bachelor (10) & Bachelor (9) Master (6) Ph.D. (5) \\\\", " \\hline", " \\end{tabular}", " \\label{table:amt_pop}", "\\end{center}", "\\end{table}", " ", " ", "Table~\\ref{table:openNMT-py_commands} shows a listing of the openNMT-py commands we used to create our NMT model and to generate fake reviews.", " ", "\\begin{table}[t]", "\\caption{Listing of used openNMT-py commands.}", "\\begin{center}", " \\begin{tabular}{ | l | l | }", " \\hline", " Phase & Bash command \\\\", " \\hline", " Preprocessing & \\begin{lstlisting}[language=bash]", "python preprocess.py -train_src context-train.txt", "-train_tgt reviews-train.txt -valid_src context-val.txt", "-valid_tgt reviews-val.txt -save_data model", "-lower -tgt_words_min_frequency 10", "\\end{lstlisting}", " \\\\ & \\\\", " Training & \\begin{lstlisting}[language=bash]", "python train.py -data model -save_model model -epochs 8", "-gpuid 0 -learning_rate_decay 0.5 -optim adam", "-learning_rate 0.001 -start_decay_at 3\\end{lstlisting}", " \\\\ & \\\\", " Generation & \\begin{lstlisting}[language=bash]", "python translate.py -model model_acc_35.54_ppl_25.68_e8.pt", "-src context-tst.txt -output pred-e8.txt -replace_unk", "-verbose -max_length 50 -gpu 0", " \\end{lstlisting} \\\\", " \\hline", " \\end{tabular}", " 
\\label{table:openNMT-py_commands}", "\\end{center}", "\\end{table}", " ", " ", "Table~\\ref{table:MTurk_sub} shows the classification performance of Amazon Mechanical Turkers, separated across different categories of NMT-Fake reviews. The category with best performance ($b=0.3, \\lambda=-5$) is denoted as NMT-Fake*.", " ", "\\begin{table}[b]", "\\caption{MTurk study subclass classification reports. Classes are imbalanced in ratio 1:6. Random predictions are$p_\\mathrm{human} = 86\\%$and$p_\\mathrm{machine} = 14\\%$, with$r_\\mathrm{human} = r_\\mathrm{machine} = 50\\%$. Class-averaged F-scores for random predictions are$42\\%$.}", "\\begin{center}", " \\begin{tabular}{ | c || c |c |c | c | }", " \\hline", "$(b=0.3, \\lambda = -3)$& Precision & Recall & F-score & Support \\\\ \\hline", " Human & 89\\% & 63\\% & 73\\% & 994\\\\", " NMT-Fake & 15\\% & 45\\% & 22\\% & 146 \\\\", " \\hline", " \\hline", "$(b=0.3, \\lambda = -5)$& Precision & Recall & F-score & Support \\\\ \\hline", " Human & 86\\% & 63\\% & 73\\% & 994\\\\", " NMT-Fake* & 16\\% & 40\\% & 23\\% & 171 \\\\", " \\hline", " \\hline", "$(b=0.5, \\lambda = -4)$& Precision & Recall & F-score & Support \\\\ \\hline", " Human & 88\\% & 63\\% & 73\\% & 994\\\\", " NMT-Fake & 21\\% & 55\\% & 30\\% & 181 \\\\", " \\hline", " \\hline", "$(b=0.7, \\lambda = -3)$& Precision & Recall & F-score & Support \\\\ \\hline", " Human & 88\\% & 63\\% & 73\\% & 994\\\\", " NMT-Fake & 19\\% & 50\\% & 27\\% & 170 \\\\", " \\hline", " \\hline", "$(b=0.7, \\lambda = -5)$& Precision & Recall & F-score & Support \\\\ \\hline", " Human & 89\\% & 63\\% & 74\\% & 994\\\\", " NMT-Fake & 21\\% & 57\\% & 31\\% & 174 \\\\", " \\hline", " \\hline", "$(b=0.9, \\lambda = -4)$& Precision & Recall & F-score & Support \\\\ \\hline", " Human & 88\\% & 63\\% & 73\\% & 994\\\\", " NMT-Fake & 18\\% & 50\\% & 27\\% & 164 \\\\", " \\hline", " \\end{tabular}", " \\label{table:MTurk_sub}", "\\end{center}", "\\end{table}", " ", 
"Figure~\\ref{fig:screenshot} shows screenshots of the first two pages of our user study with experienced participants.", " ", "\\begin{figure}[ht]", "\\centering", "\\includegraphics[width=1.\\columnwidth]{figures/screenshot_7-3.png}", "\\caption{", "Screenshots of the first two pages in the user study. Example 1 is an NMT-Fake* review; the rest are human-written.", "}", "\\label{fig:screenshot}", "\\end{figure}", " ", "Table~\\ref{table:features_adaboost} shows the features used to detect NMT-Fake reviews using the AdaBoost classifier.", " ", "\\begin{table}", "\\caption{Features used in the NMT-Fake review detector.}", "\\begin{center}", " \\begin{tabular}{ | l | c | }", " \\hline", " Feature type & Number of features \\\\ \\hline", " \\hline", " Readability features & 13 \\\\ \\hline", " Unique POS tags & $\\sim 20$ \\\\ \\hline", " Word unigrams & 22,831 \\\\ \\hline", " 1/2/3/4-grams of simple part-of-speech tags & 54,240 \\\\ \\hline", " 1/2/3-grams of detailed part-of-speech tags & 112,944 \\\\ \\hline", " 1/2/3-grams of syntactic dependency tags & 93,195 \\\\ \\hline", " \\end{tabular}", " \\label{table:features_adaboost}", "\\end{center}", "\\end{table}", " ", "\\end{document}", ""
], "question_id": [ "1a43df221a567869964ad3b275de30af2ac35598", "98b11f70239ef0e22511a3ecf6e413ecb726f954", "d4d771bcb59bab4f3eb9026cda7d182eb582027d", "12f1919a3e8ca460b931c6cacc268a926399dff4", "cd1034c183edf630018f47ff70b48d74d2bb1649", "bd9930a613dd36646e2fc016b6eb21ab34c77621" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "the Yelp Challenge dataset" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1 –5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks are not yet reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context." ], "highlighted_evidence": [ "We use the Yelp Challenge dataset BIBREF2 for our fake review generation. 
" ] }, { "unanswerable": false, "extractive_spans": [ "Yelp Challenge dataset BIBREF2" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1 –5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks are not yet reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context." ], "highlighted_evidence": [ "We use the Yelp Challenge dataset BIBREF2 for our fake review generation." 
] } ], "annotation_id": [ "a6c6f62389926ad2d6b21e8b3bbd5ee58e32ccd2", "c3b77db9a4c6f4e898460912ef2da68a2e55ba57" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] }, { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "5c25c0877f37421f694b29c367cc344a9ce048c1", "bbc27527be0e66597f3d157df615c449cd3ce805" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "855e5c21e865eb289ae9bfd97d81665ebd1f1e0f" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "AdaBoost-based classifier" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\\ref{table:features_adaboost} (Appendix)." ], "highlighted_evidence": [ "We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2)." ] } ], "annotation_id": [ "72e4b5a0cedcbc980f7040caf4347c1079a5c474" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We noticed some variation in the detection of different fake review categories. 
The respondents in our MTurk survey had most difficulties recognizing reviews of category $(b=0.3, \\lambda=-5)$, where true positive rate was $40.4\\%$, while the true negative rate of the real class was $62.7\\%$. The precision were $16\\%$ and $86\\%$, respectively. The class-averaged F-score is $47.6\\%$, which is close to random. Detailed classification reports are shown in Table~\\ref{table:MTurk_sub} in Appendix. Our MTurk-study shows that \\emph{our NMT-Fake reviews pose a significant threat to review systems}, since \\emph{ordinary native English-speakers have very big difficulties in separating real reviews from fake reviews}. We use the review category $(b=0.3, \\lambda=-5)$ for future user tests in this paper, since MTurk participants had most difficulties detecting these reviews. We refer to this category as NMT-Fake* in this paper.", "Figure~\\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kind of fake reviews. The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \\lambda=-5$) are detected with an excellent 97\\% F-score." ], "highlighted_evidence": [ "The respondents in our MTurk survey had most difficulties recognizing reviews of category $(b=0.3, \\lambda=-5)$, where true positive rate was $40.4\\%$, while the true negative rate of the real class was $62.7\\%$. The precision were $16\\%$ and $86\\%$, respectively. The class-averaged F-score is $47.6\\%$, which is close to random. Detailed classification reports are shown in Table~\\ref{table:MTurk_sub} in Appendix. Our MTurk-study shows that \\emph{our NMT-Fake reviews pose a significant threat to review systems}, since \\emph{ordinary native English-speakers have very big difficulties in separating real reviews from fake reviews}. 
We use the review category $(b=0.3, \\lambda=-5)$ for future user tests in this paper, since MTurk participants had most difficulties detecting these reviews. We refer to this category as NMT-Fake* in this paper.", "The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \\lambda=-5$) are detected with an excellent 97\\% F-score." ] } ], "annotation_id": [ "06f2c923d36116318aab2f7cb82d418020654d74" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "1,006 fake reviews and 994 real reviews" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had big difficulties in detecting our fake reviews. On average, the reviews were detected with class-averaged \\emph{F-score of only 56\\%}, with 53\\% F-score for fake review detection and 59\\% F-score for real review detection. The results are very close to \\emph{random detection}, where precision, recall and F-score would each be 50\\%. Results are recorded in Table~\\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since human detection rate across categories is close to random." ], "highlighted_evidence": [ "We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews)." ] } ], "annotation_id": [ "1249dad8a00c798589671ed2271454f6871fadad" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] } 1907.05664 Saliency Maps Generation for Automatic Text Summarization Saliency map generation techniques are at the forefront of explainable AI literature for a broad range of machine learning applications. Our goal is to question the limits of these approaches on more complex tasks. 
In this paper we apply Layer-Wise Relevance Propagation (LRP) to a sequence-to-sequence attention model trained on a text summarization dataset. We obtain unexpected saliency maps and discuss the rightfulness of these "explanations". We argue that we need a quantitative way of testing the counterfactual case to judge the truthfulness of the saliency maps. We suggest a protocol to check the validity of the importance attributed to the input and show that the saliency maps obtained sometimes capture the real use of the input features by the network, and sometimes do not. We use this example to discuss how careful we need to be when accepting them as explanations. { "section_name": [ "Introduction", "The Task and the Model", "Dataset and Training Task", "The Model", "Obtained Summaries", "Layer-Wise Relevance Propagation", "Mathematical Description", "Generation of the Saliency Maps", "Experimental results", "First Observations", "Validating the Attributions", "Conclusion" ], "paragraphs": [ [ "Ever since the LIME algorithm BIBREF0, \"explanation\" techniques focusing on finding the importance of input features with regard to a specific prediction have soared, and we now have many ways of finding saliency maps (also called heat-maps because of the way we like to visualize them). In this paper we are interested in the use of such a technique on an extreme task that highlights questions about the validity and evaluation of the approach. We would like to first set the vocabulary we will use. We agree that saliency maps are not explanations in themselves and that they are more similar to attribution, which is only one part of the human explanation process BIBREF1. We will prefer to call this importance mapping of the input an attribution rather than an explanation. 
We will talk about the importance of the input relevance score in regard to the model's computation and not make allusion to any human understanding of the model as a result.", "There exist multiple ways to generate saliency maps over the input for non-linear classifiers BIBREF2 , BIBREF3 , BIBREF4 . We refer the reader to BIBREF5 for a survey of explainable AI in general. We use in this paper Layer-Wise Relevance Propagation (LRP) BIBREF2 which aims at redistributing the value of the classifying function on the input to obtain the importance attribution. It was first created to “explain\" the classification of neural networks on image recognition tasks. It was later successfully applied to text using convolutional neural networks (CNN) BIBREF6 and then Long-Short Term Memory (LSTM) networks for sentiment analysis BIBREF7 .", "Our goal in this paper is to test the limits of the use of such a technique for more complex tasks, where the notion of input importance might not be as simple as in topic classification or sentiment analysis. We changed from a classification task to a generative task and chose a more complex one than text translation (in which we can easily find a word to word correspondence/importance between input and output). We chose text summarization. We consider abstractive and informative text summarization, meaning that we write a summary “in our own words\" and retain the important information of the original text. We refer the reader to BIBREF8 for more details on the task and the different variants that exist. Since the success of deep sequence-to-sequence models for text translation BIBREF9 , the same approaches have been applied to text summarization tasks BIBREF10 , BIBREF11 , BIBREF12 which use architectures on which we can apply LRP.", "We obtain one saliency map for each word in the generated summaries, supposed to represent the use of the input features for each element of the output sequence. 
We observe that all the saliency maps for a text are nearly identical and decorrelated with the attention distribution. We propose a way to check their validity by creating what could be seen as a counterfactual experiment from a synthesis of the saliency maps, using the same technique as in Arras et al. Arras2017. We show that in some but not all cases they help identify the important input features and that we need to rigorously check importance attributions before trusting them, regardless of whether or not the mapping “makes sense\" to us. We finally argue that in the process of identifying the important input features, verifying the saliency maps is as important as the generation step, if not more." ], [ "We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it." ], [ "The CNN/Daily mail dataset BIBREF12 is a text summarization dataset adapted from the Deepmind question-answering dataset BIBREF13 . It contains around three hundred thousand news articles coupled with summaries of about three sentences. These summaries are in fact “highlights\" of the articles provided by the media themselves. Articles have an average length of 780 words and the summaries of 50 words. We had 287 000 training pairs and 11 500 test pairs. Similarly to See et al. See2017, we limit during training and prediction the input text to 400 words and generate summaries of 200 words. We pad the shorter texts using an UNKNOWN token and truncate the longer texts. We embed the texts and summaries using a vocabulary of size 50 000, thus recreating the same parameters as See et al. See2017." ], [ "The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. 
The attention mechanism is computed as in BIBREF9 and we use greedy search for decoding. We train end-to-end, including the word embeddings. The embedding size is 128 and the hidden state size of the LSTM cells is 254." ], [ "We train the 21 350 992 parameters of the network for about 60 epochs until we achieve results that are qualitatively equivalent to those of See et al. See2017. We obtain summaries that are broadly relevant to the text but do not match the target summaries very well. We observe the same problems, such as wrong reproduction of factual details, replacing rare words with more common alternatives, or repeating nonsense after the third sentence. Figure 1 shows an example of a generated summary compared to the target one.", "The “summaries\" we generate are far from being valid summaries of the information in the texts but are sufficient to look at the attribution that LRP will give us. They pick up the general subject of the original text." ], [ "We present in this section the Layer-Wise Relevance Propagation (LRP) BIBREF2 technique that we used to attribute importance to the input features, together with how we adapted it to our model and how we generated the saliency maps. LRP redistributes the output of the model from the output layer back to the input by transmitting information backwards through the layers. We call this backwards-propagated importance the relevance. A particularity of LRP is that it attributes both negative and positive relevance: positive relevance is supposed to represent evidence that led to the classifier's result, while negative relevance represents evidence that weighed against the prediction." ], [ "We initialize the relevance of the output layer to the value of the predicted class before softmax, and we then describe the local backwards propagation of the relevance from layer to layer. For normal neural network layers we use the form of LRP with an epsilon stabilizer BIBREF2. 
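As a concrete sketch of this epsilon rule before its formal statement below, the redistribution for a single dense layer can be written in numpy as follows (a minimal sketch; variable names are my own, and the epsilon and bias term is split uniformly over the $D_l$ input dimensions, as in Eq. 7):

```python
import numpy as np

def lrp_epsilon_dense(W, b, z_in, R_out, eps=1e-5):
    """Epsilon-stabilized LRP for a dense layer with pre-activations
    z_out = W @ z_in + b.  The relevance R_out on the outputs is
    redistributed to the inputs; the epsilon/bias term is shared
    uniformly across the D_l input dimensions."""
    z_out = W @ z_in + b
    denom = z_out + eps * np.sign(z_out)  # stabilized denominator
    D_l = z_in.shape[0]
    # contrib[j, i]: share of output neuron j's relevance attributed to input i
    contrib = W * z_in[None, :] + ((eps * np.sign(z_out) + b) / D_l)[:, None]
    # each input's relevance is the sum of what it receives from every output
    return (contrib / denom[:, None] * R_out[:, None]).sum(axis=0)
```

A useful sanity check of this uniform split is that relevance is conserved: the contributions to each output neuron sum exactly to its stabilized pre-activation, so the total relevance entering a layer equals the total leaving it.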
We write down $R_{i\\leftarrow j}^{(l, l+1)}$ for the relevance received by neuron $i$ of layer $l$ from neuron $j$ of layer $l+1$: ", "$$\\begin{split}
R_{i\\leftarrow j}^{(l, l+1)} &= \\dfrac{w_{i\\rightarrow j}^{l,l+1}\\textbf {z}^l_i + \\dfrac{\\epsilon \\textrm { sign}(\\textbf {z}^{l+1}_j) + \\textbf {b}^{l+1}_j}{D_l}}{\\textbf {z}^{l+1}_j + \\epsilon * \\textrm { sign}(\\textbf {z}^{l+1}_j)} * R_j^{l+1} \\\\
\\end{split}$$ (Eq. 7) ", "where $w_{i\\rightarrow j}^{l,l+1}$ is the network's weight parameter set during training, $\\textbf {b}^{l+1}_j$ is the bias for neuron $j$ of layer $l+1$, $\\textbf {z}^{l}_i$ is the activation of neuron $i$ on layer $l$, $\\epsilon $ is the stabilizing term set to 0.00001, and $D_l$ is the dimension of the $l$-th layer.", "The relevance of a neuron is then computed as the sum of the relevance it receives from the layer(s) above.", "For LSTM cells we use the method from Arras et al. Arras2017 to solve the problem posed by the element-wise multiplications of vectors. Arras et al. noted that when such a computation happens inside an LSTM cell, it always involves a “gate\" vector and another vector containing information. The gate vector, containing only values between 0 and 1, essentially filters the second vector to allow the passing of “relevant\" information. Considering this, when we propagate relevance through an element-wise multiplication operation, we give all of the upper layer's relevance to the “information\" vector and none to the “gate\" vector." ], [ "We use the same method to transmit relevance through the attention mechanism back to the encoder because Bahdanau's attention BIBREF9 uses element-wise multiplications as well. We depict in Figure 2 the transmission end-to-end from the output layer to the input through the decoder, attention mechanism and then the bidirectional encoder. We then sum up the relevance on the word embedding to get the token's relevance as Arras et al. 
Arras2017.", "The way we generate saliency maps differs a bit from the usual context in which LRP is used as we essentially don't have one classification, but 200 (one for each word in the summary). We generate a relevance attribution for the 50 first words of the generated summary as after this point they often repeat themselves.", "This means that for each text we obtain 50 different saliency maps, each one supposed to represent the relevance of the input for a specific generated word in the summary." ], [ "In this section, we present our results from extracting attributions from the sequence-to-sequence model trained for abstractive text summarization. We first have to discuss the difference between the 50 different saliency maps we obtain and then we propose a protocol to validate the mappings." ], [ "The first observation that is made is that for one text, the 50 saliency maps are almost identical. Indeed each mapping highlights mainly the same input words with only slight variations of importance. We can see in Figure 3 an example of two nearly identical attributions for two distant and unrelated words of the summary. The saliency map generated using LRP is also uncorrelated with the attention distribution that participated in the generation of the output word. The attention distribution changes drastically between the words in the generated summary while not impacting significantly the attribution over the input text. We deleted in an experiment the relevance propagated through the attention mechanism to the encoder and didn't observe much changes in the saliency map.", "It can be seen as evidence that using the attention distribution as an “explanation\" of the prediction can be misleading. It is not the only information received by the decoder and the importance it “allocates\" to this attention state might be very low. 
What seems to happen in this application is that most of the information used is transmitted from the encoder to the decoder and the attention mechanism at each decoding step just changes marginally how it is used. Quantifying the difference between attention distribution and saliency map across multiple tasks is a possible future work.", "The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video\" highlighted in the input text, which seems to be important for the output.", "This allows us to question how good the saliency maps are in the sense that we question how well they actually represent the network's use of the input features. We will call that truthfulness of the attribution in regard to the computation, meaning that an attribution is truthful in regard to the computation if it actually highlights the important input features that the network attended to during prediction. We proceed to measure the truthfulness of the attributions by validating them quantitatively." ], [ "We propose to validate the saliency maps in a similar way as Arras et al. Arras2017 by incrementally deleting “important\" words from the input text and observe the change in the resulting generated summaries.", "We first define what “important\" (and “unimportant\") input words mean across the 50 saliency maps per texts. Relevance transmitted by LRP being positive or negative, we average the absolute value of the relevance across the saliency maps to obtain one ranking of the most “relevant\" words. The idea is that input words with negative relevance have an impact on the resulting generated word, even if it is not participating positively, while a word with a relevance close to zero should not be important at all. 
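The averaging just described can be sketched in a few lines of numpy (a minimal sketch; the array layout, one row of input-token relevances per generated word, is an assumption):

```python
import numpy as np

def rank_input_tokens(saliency_maps):
    """saliency_maps: array of shape (n_generated_words, n_input_tokens)
    holding signed LRP relevances.  Returns input-token indices ordered
    from most to least important by mean absolute relevance."""
    importance = np.abs(np.asarray(saliency_maps)).mean(axis=0)
    return np.argsort(importance)[::-1]
```

Taking the absolute value before averaging keeps strongly negative relevance from cancelling against positive relevance, which matches the intuition that both signs indicate an input the network actually used.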
We did, however, also try different methods, such as averaging the raw relevance or averaging a scaled absolute value where negative relevance is scaled down by a constant factor. The absolute value average seemed to deliver the best results.", "We incrementally delete the important words (those with the highest average) from the input and compare against a control experiment that deletes the least important words, measuring the degradation of the resulting summaries. We obtain mixed results: for some texts, we observe a quick degradation when deleting important words that is not observed when deleting unimportant words (see Figure 4 ), but for other test examples we don't observe a significant difference between the two settings (see Figure 5 ).", "One might argue that the second summary in Figure 5 is better than the first one as it makes better sentences, but since the model generates inaccurate summaries, we do not wish to make such a statement.", "This however allows us to say that the attributions generated for the text at the origin of the summaries in Figure 4 are truthful in regard to the network's computation, and we may use them for further studies of the example, whereas for the text at the origin of Figure 5 we shouldn't draw any further conclusions from the attribution generated.", "One interesting point is that neither saliency map looked “better\" than the other, meaning that there is no apparent way of determining their truthfulness in regard to the computation without doing a quantitative validation. This leads us to believe that even in simpler tasks, the saliency maps might make sense to us (for example highlighting the animal in an image classification task) without actually representing what the network really attended to, or in what way.", "We implicitly defined the counterfactual case in our experiment: “Were the important words in the input deleted, we would obtain a different summary\". 
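The deletion protocol and its control can be sketched as follows; `summarize` stands in for the trained model's decoding function and is an assumption, not part of the original code:

```python
def deletion_experiment(tokens, ranking, k, summarize):
    """Delete the k most important input tokens (per the relevance ranking)
    and, as a control, the k least important ones, then regenerate the
    summaries so their degradation can be compared."""
    most = set(ranking[:k])
    least = set(ranking[-k:])
    without_important = [t for i, t in enumerate(tokens) if i not in most]
    without_unimportant = [t for i, t in enumerate(tokens) if i not in least]
    return summarize(without_important), summarize(without_unimportant)
```

If the attribution is truthful, the first summary should degrade much faster with growing k than the second.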
Such counterfactuals are however more difficult to define for image classification for example, where it could be applying a mask over an image, or just filtering a colour or a pattern. We believe that defining a counterfactual and testing it allows us to measure and evaluate the truthfulness of the attributions and thus weight how much we can trust them." ], [ "In this work, we have implemented and applied LRP to a sequence-to-sequence model trained on a more complex task than usual: text summarization. We used previous work to solve the difficulties posed by LRP in LSTM cells and adapted the same technique for Bahdanau et al. Bahdanau2014 attention mechanism.", "We observed a peculiar behaviour of the saliency maps for the words in the output summary: they are almost all identical and seem uncorrelated with the attention distribution. We then proceeded to validate our attributions by averaging the absolute value of the relevance across the saliency maps. We obtain a ranking of the word from the most important to the least important and proceeded to delete one or another.", "We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are.", "Future work would look into the saliency maps generated by applying LRP to pointer-generator networks and compare to our current results as well as mathematically justifying the average that we did when validating our saliency maps. Some additional work is also needed on the validation of the saliency maps with counterfactual tests. 
The exploitation and evaluation of saliency map are a very important step and should not be overlooked." ] ] } { "question": [ "Which baselines did they compare?", "How many attention layers are there in their model?", "Is the explanation from saliency map correct?" ], "question_id": [ "6e2ad9ad88cceabb6977222f5e090ece36aa84ea", "aacb0b97aed6fc6a8b471b8c2e5c4ddb60988bf5", "710c1f8d4c137c8dad9972f5ceacdbf8004db208" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "saliency", "saliency", "saliency" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254." ], "yes_no": null, "free_form_answer": "", "evidence": [ "We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it.", "The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. 
The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254." ], "highlighted_evidence": [ "We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset.", "The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254." ] }, { "unanswerable": false, "extractive_spans": [ "The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254." ], "yes_no": null, "free_form_answer": "", "evidence": [ "The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254." ], "highlighted_evidence": [ "The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. 
The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254." ] } ], "annotation_id": [ "0850b7c0555801d057062480de6bb88adb81cae3", "93216bca45711b73083372495d9a2667736fbac9" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "101dbdd2108b3e676061cb693826f0959b47891b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "one", "evidence": [ "The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254." ], "highlighted_evidence": [ "The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. " ] } ], "annotation_id": [ "e0ca6b95c1c051723007955ce6804bd29f325379" ], "worker_id": [ "101dbdd2108b3e676061cb693826f0959b47891b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. 
This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are.", "The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video\" highlighted in the input text, which seems to be important for the output." ], "highlighted_evidence": [ "But we also showed that in some cases the saliency maps seem to not capture the important input features. ", "The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates" ] } ], "annotation_id": [ "79e54a7b9ba9cde5813c3434e64a02d722f13b23" ], "worker_id": [ "101dbdd2108b3e676061cb693826f0959b47891b" ] } ] } 1910.14497 Probabilistic Bias Mitigation in Word Embeddings It has been shown that word embeddings derived from large corpora tend to incorporate biases present in their training data. Various methods for mitigating these biases have been proposed, but recent work has demonstrated that these methods hide but fail to truly remove the biases, which can still be observed in word nearest-neighbor statistics. In this work we propose a probabilistic view of word embedding bias. We leverage this framework to present a novel method for mitigating bias which relies on probabilistic observations to yield a more robust bias mitigation algorithm. 
We demonstrate that this method effectively reduces bias according to three separate measures of bias while maintaining embedding quality across various popular benchmark semantic tasks { "section_name": [ "Introduction", "Background ::: Geometric Bias Mitigation", "Background ::: Geometric Bias Mitigation ::: WEAT", "Background ::: Geometric Bias Mitigation ::: RIPA", "Background ::: Geometric Bias Mitigation ::: Neighborhood Metric", "A Probabilistic Framework for Bias Mitigation", "A Probabilistic Framework for Bias Mitigation ::: Probabilistic Bias Mitigation", "A Probabilistic Framework for Bias Mitigation ::: Nearest Neighbor Bias Mitigation", "Experiments", "Discussion", "Discussion ::: Acknowledgements", "Experiment Notes", "Professions", "WEAT Word Sets" ], "paragraphs": [ [ "Word embeddings, or vector representations of words, are an important component of Natural Language Processing (NLP) models and necessary for many downstream tasks. However, word embeddings, including embeddings commonly deployed for public use, have been shown to exhibit unwanted societal stereotypes and biases, raising concerns about disparate impact on axes of gender, race, ethnicity, and religion BIBREF0, BIBREF1. The impact of this bias has manifested in a range of downstream tasks, ranging from autocomplete suggestions BIBREF2 to advertisement delivery BIBREF3, increasing the likelihood of amplifying harmful biases through the use of these models.", "The most well-established method thus far for mitigating bias relies on projecting target words onto a bias subspace (such as a gender subspace) and subtracting out the difference between the resulting distances BIBREF0. On the other hand, the most popular metric for measuring bias is the WEAT statistic BIBREF1, which compares the cosine similarities between groups of words. 
However, WEAT has been recently shown to overestimate bias as a result of implicitly relying on similar frequencies for the target words BIBREF4, and BIBREF5 demonstrated that evidence of bias can still be recovered after geometric bias mitigation by examining the neighborhood of a target word among socially-biased words.", "In response to this, we propose an alternative framework for bias mitigation in word embeddings that approaches this problem from a probabilistic perspective. The motivation for this approach is two-fold. First, most popular word embedding algorithms are probabilistic at their core – i.e., they are trained (explicitly or implicitly BIBREF6) to minimize some form of word co-occurrence probabilities. Thus, we argue that a framework for measuring and treating bias in these embeddings should take into account, in addition to their geometric aspect, their probabilistic nature too. On the other hand, the issue of bias has also been approached (albeit in different contexts) in the fairness literature, where various intuitive notions of equity such as equalized odds have been formalized through probabilistic criteria. By considering analogous criteria for the word embedding setting, we seek to draw connections between these two bodies of work.", "We present experiments on various bias mitigation benchmarks and show that our framework is comparable to state-of-the-art alternatives according to measures of geometric bias mitigation and that it performs far better according to measures of neighborhood bias. For fair comparison, we focus on mitigating a binary gender bias in pre-trained word embeddings using SGNS (skip-gram with negative-sampling), though we note that this framework and methods could be extended to other types of bias and word embedding algorithms." ], [ "Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. 
This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\\mathcal {P} = \\lbrace (he,she),(man,woman),(king,queen),\\dots \\rbrace $. The projection of a vector $v$ onto the subspace $B$ is defined by $v_B = \\sum _{j=1}^{k} (v \\cdot b_j) b_j$, where the subspace $B$ is defined by $k$ orthogonal unit vectors $B = \\lbrace b_1,\\dots ,b_k\\rbrace $." ], [ "The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:", "Here $s$, the test statistic, is defined as $s(w,A,B) = \\mathrm {mean}_{a \\in A} \\cos (w,a) - \\mathrm {mean}_{b \\in B} \\cos (w,b)$, and $X$, $Y$, $A$, and $B$ are groups of words for which the association is measured. Possible values range from $-2$ to 2 depending on the association of the word groups, and a value of zero indicates that $X$ and $Y$ are equally associated with $A$ and $B$. See BIBREF4 for further details on WEAT." ], [ "The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector $v$ with respect to a relation vector $b$. The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric, as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bounded by $[-||w||,||w||]$." ], [ "The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. 
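The WEAT test statistic and effect size described above can be sketched as follows. This is a minimal illustration with toy vectors; the function names are ours, not from the paper's code, and real evaluations use the full word sets listed in the appendix.

```python
import numpy as np

def cos_sim(u, v):
    # cosine similarity between two word vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B):
    # test statistic s(w, A, B) = mean_a cos(w, a) - mean_b cos(w, b)
    return (np.mean([cos_sim(w, a) for a in A])
            - np.mean([cos_sim(w, b) for b in B]))

def weat_effect_size(X, Y, A, B):
    # effect size: difference of mean associations over X and Y,
    # normalised by the std-dev of s over X union Y; bounded by [-2, 2]
    s_X = [assoc(x, A, B) for x in X]
    s_Y = [assoc(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y)

# toy example: X aligns perfectly with A and Y with B, giving the
# maximal effect size of 2
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(weat_effect_size([e1], [e2], [e1], [e2]))  # → 2.0
```

A value of zero would indicate that the target sets are equally associated with both attribute sets, matching the interpretation given above.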
As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word’s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias." ], [ "Our objective here is to extend and complement the geometric notions of word embedding bias described in the previous section with an alternative, probabilistic, approach. Intuitively, we seek a notion of equality akin to that of demographic parity in the fairness literature, which requires that a decision or outcome be independent of a protected attribute such as gender BIBREF7. Similarly, when considering a probabilistic definition of unbiasedness in word embeddings, we can consider the conditional probabilities of word pairs, ensuring for example that $p(doctor|man) \\approx p(doctor|woman)$, and can extend this probabilistic framework to include the neighborhood of a target word, addressing the potential pitfalls of geometric bias mitigation.", "Conveniently, most word embedding frameworks allow for immediate computation of the conditional probabilities $P(w|c)$. Here, we focus our attention on the Skip-Gram method with Negative Sampling (SGNS) of BIBREF8, although our framework can be equivalently instantiated for most other popular embedding methods, owing to their core similarities BIBREF6, BIBREF9. Leveraging this probabilistic nature, we construct a bias mitigation method in two steps, and examine each step as an independent method as well as the resulting composite method." ], [ "This component of our bias mitigation framework seeks to enforce that the probability of prediction or outcome cannot depend on a protected class such as gender. 
We can formalize this intuitive goal through a loss function that penalizes the discrepancy between the conditional probabilities of a target word (i.e., one that should not be affected by the protected attribute) conditioned on two words describing the protected attribute (e.g., man and woman in the case of gender). That is, for every target word we seek to minimize:", "where $\\mathcal {P} = \\lbrace (he,she),(man,woman),(king,queen), \\dots \\rbrace $ is a set of word pairs characterizing the protected attribute, akin to that used in previous work BIBREF0.", "At this point, the specific form of the objective will depend on the type of word embeddings used. For our example of SGNS, recall that this algorithm models the conditional probability of a target word given a context word as a function of the inner product of their representations. Though exact computation of the conditional probability requires summing over the conditional probabilities of all words in the vocabulary, we can use the estimate of the log conditional probability proposed by BIBREF8, i.e., $\\log p(w_O|w_I) \\approx \\log \\sigma ({v^{\\prime }_{w_O}}^T v_{w_I}) + \\sum _{i=1}^{k} \\log \\sigma ({-v^{\\prime }_{w_i}}^T v_{w_I})$." ], [ "Based on observations by BIBREF5, we extend our method to consider the composition of the neighborhood of socially-gendered words of a target word. We note that bias in a word embedding depends not only on the relationship between a target word and explicitly gendered words like man and woman, but also between a target word and socially-biased male or female words. Bolukbasi et al BIBREF0 proposed a method for eliminating this kind of indirect bias through geometric bias mitigation, but it is shown to be ineffective by the neighborhood metric BIBREF5.", "Instead, we extend our method of bias mitigation to account for this neighborhood effect. 
Specifically, we examine the conditional probabilities of a target word given the $k/2$ nearest neighbors from the male socially-biased words as well as given the $k/2$ nearest neighbors from the female socially-biased words (in sorted order, from smallest to largest). The groups of socially-biased words are constructed as described in the neighborhood metric. If the word is unbiased according to the neighborhood metric, these probabilities should be comparable. We then use the following as our loss function:", "", "where $m$ and $f$ represent the male and female neighbors sorted by distance to the target word $t$ (we use $L_1$ distance)." ], [ "We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model; these are embedded and a loss value is computed, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (see Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.", "We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). 
We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.", "We compare this method of bias mitigation with no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according to the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge, this is the first bias mitigation method to perform reasonably well on both metrics." ], [ "We proposed a simple method of bias mitigation based on these probabilistic notions of fairness, and showed that it leads to promising results in various benchmark bias mitigation tasks. Future work should include considering a more rigorous and non-binary definition of bias and experimenting with various embedding algorithms and network architectures." ], [ "The authors would like to thank Tommi Jaakkola for stimulating discussions during the initial stages of this work." 
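The SGNS log-probability estimate and the parity objective described above can be sketched as follows. This is a toy illustration under our own naming, not the authors' implementation; the 3/4-power unigram distribution for drawing the $k$ negative samples follows the Experiment Notes.

```python
import numpy as np

def sgns_log_prob(v_target, v_ctx, v_negs):
    # SGNS estimate of log p(w_O | w_I) (Mikolov et al.):
    #   log sigma(v'_O . v_I) + sum_i log sigma(-v'_i . v_I)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    lp = np.log(sigmoid(v_target @ v_ctx))
    lp += sum(np.log(sigmoid(-(v_n @ v_ctx))) for v_n in v_negs)
    return lp

def parity_loss(v_t, attribute_pairs, v_negs):
    # penalise the gap between log p(t | male word) and log p(t | female word)
    # over attribute pairs such as (he, she), (man, woman), ...
    return sum(abs(sgns_log_prob(v_t, v_m, v_negs)
                   - sgns_log_prob(v_t, v_f, v_negs))
               for v_m, v_f in attribute_pairs)

def negative_sampling_dist(unigram_counts):
    # unigram distribution raised to the 3/4 power, renormalised,
    # from which the k negative-sample words are drawn
    p = np.asarray(unigram_counts, dtype=float) ** 0.75
    return p / p.sum()
```

If the two words of a pair have identical representations, the loss is exactly zero; training nudges the target embedding so that the two conditional log-probabilities converge.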
], [ "For Equation 4, as described in the original work, the $k$ sample words $w_i$ are drawn from the corpus using the unigram distribution raised to the 3/4 power.", "For reference, the most male socially-biased words include words such as: ’john’, ’jr’, ’mlb’, ’dick’, ’nfl’, ’cfl’, ’sgt’, ’abbot’, ’halfback’, ’jock’, ’mike’, ’joseph’, while the most female socially-biased words include words such as: ’feminine’, ’marital’, ’tatiana’, ’pregnancy’, ’eva’, ’pageant’, ’distress’, ’cristina’, ’ida’, ’beauty’, ’sexuality’, ’fertility’" ], [ "'accountant', 'acquaintance', 'actor', 'actress', 'administrator', 'adventurer', 'advocate', 'aide', 'alderman', 'ambassador', 'analyst', 'anthropologist', 'archaeologist', 'archbishop', 'architect', 'artist', 'assassin', 'astronaut', 'astronomer', 'athlete', 'attorney', 'author', 'baker', 'banker', 'barber', 'baron', 'barrister', 'bartender', 'biologist', 'bishop', 'bodyguard', 'boss', 'boxer', 'broadcaster', 'broker', 'businessman', 'butcher', 'butler', 'captain', 'caretaker', 'carpenter', 'cartoonist', 'cellist', 'chancellor', 'chaplain', 'character', 'chef', 'chemist', 'choreographer', 'cinematographer', 'citizen', 'cleric', 'clerk', 'coach', 'collector', 'colonel', 'columnist', 'comedian', 'comic', 'commander', 'commentator', 'commissioner', 'composer', 'conductor', 'confesses', 'congressman', 'constable', 'consultant', 'cop', 'correspondent', 'counselor', 'critic', 'crusader', 'curator', 'dad', 'dancer', 'dean', 'dentist', 'deputy', 'detective', 'diplomat', 'director', 'doctor', 'drummer', 'economist', 'editor', 'educator', 'employee', 'entertainer', 'entrepreneur', 'envoy', 'evangelist', 'farmer', 'filmmaker', 'financier', 'fisherman', 'footballer', 'foreman', 'gangster', 'gardener', 'geologist', 'goalkeeper', 'guitarist', 'headmaster', 'historian', 'hooker', 'illustrator', 'industrialist', 'inspector', 'instructor', 'inventor', 'investigator', 'journalist', 'judge', 'jurist', 'landlord', 'lawyer', 'lecturer', 
'legislator', 'librarian', 'lieutenant', 'lyricist', 'maestro', 'magician', 'magistrate', 'maid', 'manager', 'marshal', 'mathematician', 'mechanic', 'midfielder', 'minister', 'missionary', 'monk', 'musician', 'nanny', 'narrator', 'naturalist', 'novelist', 'nun', 'nurse', 'observer', 'officer', 'organist', 'painter', 'pastor', 'performer', 'philanthropist', 'philosopher', 'photographer', 'physician', 'physicist', 'pianist', 'planner', 'playwright', 'poet', 'policeman', 'politician', 'preacher', 'president', 'priest', 'principal', 'prisoner', 'professor', 'programmer', 'promoter', 'proprietor', 'prosecutor', 'protagonist', 'provost', 'psychiatrist', 'psychologist', 'rabbi', 'ranger', 'researcher', 'sailor', 'saint', 'salesman', 'saxophonist', 'scholar', 'scientist', 'screenwriter', 'sculptor', 'secretary', 'senator', 'sergeant', 'servant', 'singer', 'skipper', 'sociologist', 'soldier', 'solicitor', 'soloist', 'sportsman', 'statesman', 'steward', 'student', 'substitute', 'superintendent', 'surgeon', 'surveyor', 'swimmer', 'teacher', 'technician', 'teenager', 'therapist', 'trader', 'treasurer', 'trooper', 'trumpeter', 'tutor', 'tycoon', 'violinist', 'vocalist', 'waiter', 'waitress', 'warden', 'warrior', 'worker', 'wrestler', 'writer'" ], [ "Words used for WEAT statistic, consisting of baseline bias tests and gender bias tests in the format X vs Y / A vs B", "Flowers vs Insects / Pleasant vs Unpleasant", "X: \"aster\", \"clover\", \"hyacinth\", \"marigold\", \"poppy\", \"azalea\", \"crocus\", \"iris\", \"orchid\", \"rose\", \"bluebell\", \"daffodil\", \"lilac\", \"pansy\", \"tulip\", \"buttercup\", \"daisy\", \"lily\", \"peony\", \"violet\", \"carnation\", \"gladiola\", \"magnolia\", \"petunia\", \"zinnia\"", "Y: \"ant\", \"caterpillar\", \"flea\", \"locust\", \"spider\", \"bedbug\", \"centipede\", \"fly\", \"maggot\", \"tarantula\", \"bee\", \"cockroach\", \"gnat\", \"mosquito\", \"termite\", \"beetle\", \"cricket\", \"hornet\", \"moth\", \"wasp\", \"blackfly\", 
\"dragonfly\", \"horsefly\", \"roach\", \"weevil\"", "A: \"caress\", \"freedom\", \"health\", \"love\", \"peace\", \"cheer\", \"friend\", \"heaven\", \"loyal\", \"pleasure\", \"diamond\", \"gentle\", \"honest\", \"lucky\", \"rainbow\", \"diploma\", \"gift\", \"honor\", \"miracle\", \"sunrise\", \"family\", \"happy\", \"laughter\", \"paradise\", \"vacation\"", "B: \"abuse\", \"crash\", \"filth\", \"murder\", \"sickness\", \"accident\", \"death\", \"grief\", \"poison\", \"stink\", \"assault\", \"disaster\", \"hatred\", \"pollute\", \"tragedy\", \"divorce\", \"jail\", \"poverty\", \"ugly\", \"cancer\", \"kill\", \"rotten\", \"vomit\", \"agony\", \"prison\"", "Instruments vs Weapons / Pleasant vs Unpleasant:", "X: \"bagpipe\", \"cello\", \"guitar\", \"lute\", \"trombone\", \"banjo\", \"clarinet\", \"harmonica\", \"mandolin\", \"trumpet\", \"bassoon\", \"drum\", \"harp\", \"oboe\", \"tuba\", \"bell\", \"fiddle\", \"harpsichord\", \"piano\", \"viola\", \"bongo\", \"flute\", \"horn\", \"saxophone\", \"violin\"", "Y: \"arrow\", \"club\", \"gun\", \"missile\", \"spear\", \"ax\", \"dagger\", \"harpoon\", \"pistol\", \"sword\", \"blade\", \"dynamite\", \"hatchet\", \"rifle\", \"tank\", \"bomb\", \"firearm\", \"knife\", \"shotgun\", \"teargas\", \"cannon\", \"grenade\", \"mace\", \"slingshot\", \"whip\"", "A: \"caress\", \"freedom\", \"health\", \"love\", \"peace\", \"cheer\", \"friend\", \"heaven\", \"loyal\", \"pleasure\", \"diamond\", \"gentle\", \"honest\", \"lucky\", \"rainbow\", \"diploma\", \"gift\", \"honor\", \"miracle\", \"sunrise\", \"family\", \"happy\", \"laughter\", \"paradise\", \"vacation\"", "B: \"abuse\", \"crash\", \"filth\", \"murder\", \"sickness\", \"accident\", \"death\", \"grief\", \"poison\", \"stink\", \"assault\", \"disaster\", \"hatred\", \"pollute\", \"tragedy\", \"divorce\", \"jail\", \"poverty\", \"ugly\", \"cancer\", \"kill\", \"rotten\", \"vomit\", \"agony\", \"prison\"", "Male vs Female / Career vs Family:", "X: \"brother\", \"father\", 
\"uncle\", \"grandfather\", \"son\", \"he\", \"his\", \"him\", \"man\", \"himself\", \"men\", \"husband\", \"boy\", \"uncle\", \"nephew\", \"boyfriend\", \"king\", \"actor\"", "Y: \"sister\", \"mother\", \"aunt\", \"grandmother\", \"daughter\", \"she\", \"hers\", \"her\", \"woman\", \"herself\", \"women\", \"wife\", \"aunt\", \"niece\", \"girlfriend\", \"queen\", \"actress\"", "A: \"executive\", \"management\", \"professional\", \"corporation\", \"salary\", \"office\", \"business\", \"career\", \"industry\", \"company\", \"promotion\", \"profession\", \"CEO\", \"manager\", \"coworker\", \"entrepreneur\"", "B: \"home\", \"parents\", \"children\", \"family\", \"cousins\", \"marriage\", \"wedding\", \"relatives\", \"grandparents\", \"grandchildren\", \"nurture\", \"child\", \"toddler\", \"infant\", \"teenager\"", "Math vs Art / Male vs Female:", "X: \"math\", \"algebra\", \"geometry\", \"calculus\", \"equations\", \"computation\", \"numbers\", \"addition\", \"trigonometry\", \"arithmetic\", \"logic\", \"proofs\", \"multiplication\", \"mathematics\"", "Y: \"poetry\", \"art\", \"Shakespeare\", \"dance\", \"literature\", \"novel\", \"symphony\", \"drama\", \"orchestra\", \"music\", \"ballet\", \"arts\", \"creative\", \"sculpture\"", "A: \"brother\", \"father\", \"uncle\", \"grandfather\", \"son\", \"he\", \"his\", \"him\", \"man\", \"himself\", \"men\", \"husband\", \"boy\", \"uncle\", \"nephew\", \"boyfriend\", \"king\", \"actor\"", "B: \"sister\", \"mother\", \"aunt\", \"grandmother\", \"daughter\", \"she\", \"hers\", \"her\", \"woman\", \"herself\", \"women\", \"wife\", \"aunt\", \"niece\", \"girlfriend\", \"queen\", \"actress\"", "Science vs Art / Male8 vs Female8:", "X:\"science\", \"technology\", \"physics\", \"chemistry\", \"Einstein\", \"NASA\", \"experiment\", \"astronomy\", \"biology\", \"aeronautics\", \"mechanics\", \"thermodynamics\"", "Y: \"poetry\", \"art\", \"Shakespeare\", \"dance\", \"literature\", \"novel\", \"symphony\", \"drama\", \"orchestra\", 
\"music\", \"ballet\", \"arts\", \"creative\", \"sculpture\"", "A: \"brother\", \"father\", \"uncle\", \"grandfather\", \"son\", \"he\", \"his\", \"him\", \"man\", \"himself\", \"men\", \"husband\", \"boy\", \"uncle\", \"nephew\", \"boyfriend\"", "B: \"sister\", \"mother\", \"aunt\", \"grandmother\", \"daughter\", \"she\", \"hers\", \"her\", \"woman\", \"herself\", \"women\", \"wife\", \"aunt\", \"niece\", \"girlfriend\"" ] ] } { "question": [ "How is embedding quality assessed?", "What are the three measures of bias which are reduced in experiments?", "What are the probabilistic observations which contribute to the more robust algorithm?" ], "question_id": [ "47726be8641e1b864f17f85db9644ce676861576", "8958465d1eaf81c8b781ba4d764a4f5329f026aa", "31b6544346e9a31d656e197ad01756813ee89422" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "bias", "bias", "bias" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics." 
], "yes_no": null, "free_form_answer": "", "evidence": [ "We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.", "We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.", "We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). 
We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics." ], "highlighted_evidence": [ "We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.", "We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. 
For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.", "We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics." ] }, { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "50e0354ccb4d7d6fda33c34e69133daaa8978a2f", "eb66f1f7e89eca5dcf2ae6ef450b1693a43f4e69" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "RIPA, Neighborhood Metric, WEAT", "evidence": [ "Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as$\\mathcal {P} = \\lbrace (he,she),(man,woman),(king,queen)...\\rbrace $. 
The projection of a vector$v$onto$B$(the subspace) is defined by$v_B = \\sum _{j=1}^{k} (v \\cdot b_j) b_j$where a subspace$B$is defined by k orthogonal unit vectors$B = {b_1,...,b_k}$.", "The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:", "Where$s$, the test statistic, is defined as:$s(w,A,B) = mean_{a \\in A} cos(w,a) - mean_{b \\in B} cos(w,a)$, and$X$,$Y$,$A$, and$B$are groups of words for which the association is measured. Possible values range from$-2$to 2 depending on the association of the words groups, and a value of zero indicates$X$and$Y$are equally associated with$A$and$B$. See BIBREF4 for further details on WEAT.", "The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector$v$with respect to a relation vector$b$. The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bound by$[-||w||,||w||]$.", "The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the$k$nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. 
As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word’s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias.", "FLOAT SELECTED: Table 1: Remaining Bias (as measured by RIPA and Neighborhood metrics) in fastText embeddings for baseline (top two rows) and our (bottom three) methods. Figure 2: Remaining Bias (WEAT score)" ], "highlighted_evidence": [ "Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0.", "The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:\n\nWhere$s$, the test statistic, is defined as:$s(w,A,B) = mean_{a \\in A} cos(w,a) - mean_{b \\in B} cos(w,a)$, and$X$,$Y$,$A$, and$B$are groups of words for which the association is measured.", "The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. ", "The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the$k$nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector.", "FLOAT SELECTED: Table 1: Remaining Bias (as measured by RIPA and Neighborhood metrics) in fastText embeddings for baseline (top two rows) and our (bottom three) methods. 
Figure 2: Remaining Bias (WEAT score)" ] } ], "annotation_id": [ "08a22700ab88c5fb568745e6f7c1b5da25782626" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "9b4792d66cec53f8ea37bccd5cf7cb9c22290d82" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] } 1912.02481 Massive vs. Curated Word Embeddings for Low-Resourced Languages. The Case of Yor\ub\'a and Twi The success of several architectures to learn semantic representations from unannotated text and the availability of these kind of texts in online multilingual resources such as Wikipedia has facilitated the massive and automatic creation of resources for multiple languages. The evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. For low-resourced languages, the evaluation is more difficult and normally ignored, with the hope that the impressive capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced setting too. In this paper we focus on two African languages, Yor\ub\'a and Twi, and compare the word embeddings obtained in this way, with word embeddings obtained from curated corpora and a language-dependent processing. We analyse the noise in the publicly available corpora, collect high quality and noisy data for the two languages and quantify the improvements that depend not only on the amount of data but on the quality too. We also use different architectures that learn word representations both from surface forms and characters to further exploit all the available information which showed to be important for these languages. For the evaluation, we manually translate the wordsim-353 word pairs dataset from English into Yor\ub\'a and Twi. 
As output of the work, we provide corpora, embeddings and the test suites for both languages. { "section_name": [ "Introduction", "Related Work", "Languages under Study ::: Yorùbá", "Languages under Study ::: Twi", "Data", "Data ::: Training Corpora", "Data ::: Evaluation Test Sets ::: Yorùbá.", "Data ::: Evaluation Test Sets ::: Twi", "Semantic Representations", "Semantic Representations ::: Word Embeddings Architectures", "Semantic Representations ::: Experiments ::: FastText Training and Evaluation", "Semantic Representations ::: Experiments ::: CWE Training and Evaluation", "Semantic Representations ::: Experiments ::: BERT Evaluation on NER Task", "Summary and Discussion", "Acknowledgements" ], "paragraphs": [ [ "In recent years, word embeddings BIBREF0, BIBREF1, BIBREF2 have been proven to be very useful for training downstream natural language processing (NLP) tasks. Moreover, contextualized embeddings BIBREF3, BIBREF4 have been shown to further improve the performance of NLP tasks such as named entity recognition, question answering, or text classification when used as word features because they are able to resolve ambiguities of word representations when they appear in different contexts. Different deep learning architectures such as multilingual BERT BIBREF4, LASER BIBREF5 and XLM BIBREF6 have proved successful in the multilingual setting. All these architectures learn the semantic representations from unannotated text, making them cheap given the availability of texts in online multilingual resources such as Wikipedia. However, the evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. This is the best-case scenario: languages with tons of data for training that generate high-quality models.", "For low-resourced languages, the evaluation is more difficult and therefore normally ignored simply because of the lack of resources. 
In these cases, training data is scarce, and the assumption that the capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced one does not need to be true. In this work, we focus on two African languages, Yorùbá and Twi, and carry out several experiments to verify this claim. Just by a simple inspection of the word embeddings trained on Wikipedia by fastText, we see a high number of non-Yorùbá or non-Twi words in the vocabularies. For Twi, the vocabulary has only 935 words, and for Yorùbá we estimate that 135 k out of the 150 k words belong to other languages such as English, French and Arabic.", "In order to improve the semantic representations for these languages, we collect online texts and study the influence of the quality and quantity of the data in the final models. We also examine the most appropriate architecture depending on the characteristics of each language. Finally, we translate test sets and annotate corpora to evaluate the performance of both our models together with fastText and BERT pre-trained embeddings which could not be evaluated otherwise for Yorùbá and Twi. The evaluation is carried out in a word similarity and relatedness task using the wordsim-353 test set, and in a named entity recognition (NER) task where embeddings play a crucial role. Of course, the evaluation of the models in only two tasks is not exhaustive but it is an indication of the quality we can obtain for these two low-resourced languages as compared to others such as English where these evaluations are already available.", "The rest of the paper is organized as follows. Related works are reviewed in Section SECREF2 The two languages under study are described in Section SECREF3. We introduce the corpora and test sets in Section SECREF4. The fifth section explores the different training architectures we consider, and the experiments that are carried out. 
Finally, discussion and concluding remarks are given in Section SECREF6." ], [ "The large amount of freely available text on the internet for multiple languages is facilitating the massive and automatic creation of multilingual resources. The resource par excellence is Wikipedia, an online encyclopedia currently available in 307 languages. Other initiatives such as Common Crawl or the Jehovah’s Witnesses site are also repositories for multilingual data, usually assumed to be noisier than Wikipedia. Word and contextual embeddings have been pre-trained on these data, so that the resources are nowadays at hand for more than 100 languages. Some examples include fastText word embeddings BIBREF2, BIBREF7, MUSE embeddings BIBREF8, BERT multilingual embeddings BIBREF4 and LASER sentence embeddings BIBREF5. In all cases, embeddings are trained either simultaneously for multiple languages, joining high- and low-resource data, or following the same methodology.", "On the other hand, different approaches try to specifically design architectures to learn embeddings in a low-resourced setting. ChaudharyEtAl:2018 follow a transfer learning approach that uses phonemes, lemmas and morphological tags to transfer knowledge from a related high-resource language into the low-resource one. jiangEtal:2018 apply Positive-Unlabeled Learning for word embedding calculations, assuming that unobserved pairs of words in a corpus also convey information, which is especially important for small corpora.", "In order to assess the quality of word embeddings, word similarity and relatedness tasks are usually used. wordsim-353 BIBREF9 is a collection of 353 pairs annotated with semantic similarity scores on a scale from 0 to 10. Despite the problems detected in this dataset BIBREF10, it is widely used by the community. The test set was originally created for English, but the need for comparison with other languages has motivated several translations/adaptations.
In hassanMihalcea:2009 the test was translated manually into Spanish, Romanian and Arabic and the scores were adapted to reflect similarities in the new language. The reported correlation between the English scores and the Spanish ones is 0.86. Later, JoubarneInkpen:2011 show indications that the measures of similarity highly correlate across languages. leviantReichart:2015 translated also wordsim-353 into German, Italian and Russian and used crowdsourcing to score the pairs. Finally, jiangEtal:2018 translated with Google Cloud the test set from English into Czech, Danish and Dutch. In our work, native speakers translate wordsim-353 into Yorùbá and Twi, and similarity scores are kept unless the discrepancy with English is big (see Section SECREF11 for details). A similar approach to our work is done for Gujarati in JoshiEtAl:2019." ], [ "is a language in the West Africa with over 50 million speakers. It is spoken among other languages in Nigeria, republic of Togo, Benin Republic, Ghana and Sierra Leon. It is also a language of Òrìsà in Cuba, Brazil, and some Caribbean countries. It is one of the three major languages in Nigeria and it is regarded as the third most spoken native African language. There are different dialects of Yorùbá in Nigeria BIBREF11, BIBREF12, BIBREF13. However, in this paper our focus is the standard Yorùbá based upon a report from the 1974 Joint Consultative Committee on Education BIBREF14.", "Standard Yorùbá has 25 letters without the Latin characters c, q, v, x and z. There are 18 consonants (b, d, f, g, gb, j[dz], k, l, m, n, p[kp], r, s, ṣ, t, w y[j]), 7 oral vowels (a, e, ẹ, i, o, ọ, u), five nasal vowels, (an,$ \\underaccent{\\dot{}}{e}$n, in,$ \\underaccent{\\dot{}}{o}$n, un) and syllabic nasals (m̀, ḿ, ǹ, ń). Yorùbá is a tone language which makes heavy use of lexical tones which are indicated by the use of diacritics. 
There are three tones in Yorùbá namely low, mid and high which are represented as grave ($\\setminus $), macron ($-$) and acute ($/$) symbols respectively. These tones are applied on vowels and syllabic nasals. Mid tone is usually left unmarked on vowels and every initial or first vowel in a word cannot have a high tone. It is important to note that tone information is needed for correct pronunciation and to have the meaning of a word BIBREF15, BIBREF12, BIBREF14. For example, owó (money), ọw (broom), òwò (business), w (honour), ọw (hand), and w (group) are different words with different dots and diacritic combinations. According to Asahiah2014, Standard Yorùbá uses 4 diacritics, 3 are for marking tones while the fourth which is the dot below is used to indicate the open phonetic variants of letter \"e\" and \"o\" and the long variant of \"s\". Also, there are 19 single diacritic letters, 3 are marked with dots below (ẹ, ọ, ṣ) while the rest are either having the grave or acute accent. The four double diacritics are divided between the grave and the acute accent as well.", "As noted in Asahiah2014, most of the Yorùbá texts found in websites or public domain repositories (i) either use the correct Yorùbá orthography or (ii) replace diacritized characters with un-diacritized ones.", "This happens as a result of many factors, but most especially to the unavailability of appropriate input devices for the accurate application of the diacritical marks BIBREF11. This has led to research on restoration models for diacritics BIBREF16, but the problem is not well solved and we find that most Yorùbá text in the public domain today is not well diacritized. Wikipedia is not an exception." ], [ "is an Akan language of the Central Tano Branch of the Niger Congo family of languages. It is the most widely spoken of the about 80 indigenous languages in Ghana BIBREF17. 
It has about 9 million native speakers and about a total of 17–18 million Ghanaians have it as either first or second language. There are two mutually intelligible dialects, Asante and Akuapem, and sub-dialectical variants which are mostly unknown to and unnoticed by non-native speakers. It is also mutually intelligible with Fante and to a large extent Bono, another of the Akan languages. It is one of, if not the, easiest to learn to speak of the indigenous Ghanaian languages. The same is however not true when it comes to reading and especially writing. This is due to a number of easily overlooked complexities in the structure of the language. First of all, similarly to Yorùbá, Twi is a tonal language but written without diacritics or accents. As a result, words which are pronounced differently and unambiguous in speech tend to be ambiguous in writing. Besides, most of such words fit interchangeably in the same context and some of them can have more than two meanings. A simple example is:", "Me papa aba nti na me ne wo redi no yie no. S wo ara wo nim s me papa ba a, me suban fofor adi.", "This sentence could be translated as", "(i) I'm only treating you nicely because I'm in a good mood. You already know I'm a completely different person when I'm in a good mood.", "(ii) I'm only treating you nicely because my dad is around. You already know I'm a completely different person when my dad comes around.", "Another characteristic of Twi is the fact that a good number of stop words have the same written form as content words. For instance, “na” or “na” could be the words “and, then”, the phrase “and then” or the word “mother”. This kind of ambiguity has consequences in several natural language applications where stop words are removed from text.", "Finally, we want to point out that words can also be written with or without prefixes. An example is this same na and na which happen to be the same word with an omissible prefix across its multiple senses. 
For some words, the prefix characters are mostly used when the word begins a sentence and omitted in the middle. This however depends on the author/speaker. For the word embeddings calculation, this implies that one would have different embeddings for the same word found in different contexts." ], [ "We collect clean and noisy corpora for Yorùbá and Twi in order to quantify the effect of noise on the quality of the embeddings, where noisy has a different meaning depending on the language as it will be explained in the next subsections." ], [ "For Yorùbá, we use several corpora collected by the Niger-Volta Language Technologies Institute with texts from different sources, including the Lagos-NWU conversational speech corpus, fully-diacritized Yorùbá language websites and an online Bible. The largest source with clean data is the JW300 corpus. We also created our own small-sized corpus by web-crawling three Yorùbá language websites (Alàkwé, r Yorùbá and Èdè Yorùbá Rẹw in Table TABREF7), some Yoruba Tweets with full diacritics and also news corpora (BBC Yorùbá and VON Yorùbá) with poor diacritics which we use to introduce noise. By noisy corpus, we refer to texts with incorrect diacritics (e.g in BBC Yorùbá), removal of tonal symbols (e.g in VON Yorùbá) and removal of all diacritics/under-dots (e.g some articles in Yorùbá Wikipedia). Furthermore, we got two manually typed fully-diacritized Yorùbá literature (Ìrìnkèrindò nínú igbó elégbèje and Igbó Olódùmarè) both written by Daniel Orowole Olorunfemi Fagunwa a popular Yorùbá author. The number of tokens available from each source, the link to the original source and the quality of the data is summarised in Table TABREF7.", "The gathering of clean data in Twi is more difficult. We use as the base text as it has been shown that the Bible is the most available resource for low and endangered languages BIBREF18. This is the cleanest of all the text we could obtain. 
In addition, we use the available (and small) Wikipedia dumps which are quite noisy, i.e. Wikipedia contains a good number of English words, spelling errors and Twi sentences formulated in a non-natural way (formulated as L2 speakers would speak Twi as compared to native speakers). Lastly, we added text crawled from jw and the JW300 Twi corpus. Notice that the Bible text, is mainly written in the Asante dialect whilst the last, Jehovah's Witnesses, was written mainly in the Akuapem dialect. The Wikipedia text is a mixture of the two dialects. This introduces a lot of noise into the embeddings as the spelling of most words differs especially at the end of the words due to the mixture of dialects. The JW300 Twi corpus also contains mixed dialects but is mainly Akuampem. In this case, the noise comes also from spelling errors and the uncommon addition of diacritics which are not standardised on certain vowels. Figures for Twi corpora are summarised in the bottom block of Table TABREF7." ], [ "One of the contribution of this work is the introduction of the wordsim-353 word pairs dataset for Yorùbá. All the 353 word pairs were translated from English to Yorùbá by 3 native speakers. The set is composed of 446 unique English words, 348 of which can be expressed as one-word translation in Yorùbá (e.g. book translates to ìwé). In 61 cases (most countries and locations but also other content words) translations are transliterations (e.g. Doctor is dókítà and cucumber kùkúmbà.). 98 words were translated by short phrases instead of single words. This mostly affects words from science and technology (e.g. keyboard translates to pátákó ìtwé —literally meaning typing board—, laboratory translates to ìyàrá ìṣèwádìí —research room—, and ecology translates to ìm nípa àyíká while psychology translates to ìm nípa dá). Finally, 6 terms have the same form in English and Yorùbá therefore they are retained like that in the dataset (e.g. 
Jazz, Rock and acronyms such as FBI or OPEC).", "We also annotate the Global Voices Yorùbá corpus to test the performance of our trained Yorùbá BERT embeddings on the named entity recognition task. The corpus consists of 25 k tokens which we annotate with four named entity types: DATE, location (LOC), organization (ORG) and personal names (PER). Any other token that does not belong to the four named entities is tagged with \"O\". The dataset is further split into training (70%), development (10%) and test (20%) partitions. Table TABREF12 shows the number of named entities per type and partition." ], [ "Just like Yorùbá, the wordsim-353 word pairs dataset was translated for Twi. Out of the 353 word pairs, 274 were used in this case. The remaining 79 pairs contain words that translate into longer phrases.", "The number of words that can be translated by a single token is higher than for Yorùbá. Within the 274 pairs, there are 351 unique English words which translated to 310 unique Twi words. 298 of the 310 Twi words are single word translations, 4 transliterations and 16 are used as is.", "Even if JoubarneInkpen:2011 showed indications that semantic similarity has a high correlation across languages, different nuances between words are captured differently by languages. For instance, both money and currency in English translate into sika in Twi (and other 32 English words which translate to 14 Twi words belong to this category) and drink in English is translated as Nsa or nom depending on the part of speech (noun for the former, verb for the latter). 17 English words fall into this category. In translating these, we picked the translation that best suits the context (other word in the pair). In two cases, the correlation is not fulfilled at all: soap–opera and star–movies are not related in the Twi language and the score has been modified accordingly." 
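The evaluation protocol for these translated test sets, comparing human similarity judgements against cosine similarities of the learned embeddings via Spearman correlation, can be sketched in plain Python; the word pairs, vectors and scores below are toy placeholders, not the actual Yorùbá or Twi data:

```python
from math import sqrt

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def spearman(xs, ys):
    # Spearman rank correlation (no tie handling, enough for a sketch)
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    m = (len(xs) + 1) / 2  # mean rank
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    return cov / sqrt(sum((a - m) ** 2 for a in rx) * sum((b - m) ** 2 for b in ry))

# Toy evaluation: hypothetical human scores vs. model cosine similarities
vecs = {"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [0.0, 1.0], "d": [0.5, 0.5]}
pairs = [("a", "b"), ("a", "c"), ("a", "d")]
human = [9.0, 1.0, 5.0]  # hypothetical wordsim-style judgements
model = [cosine(vecs[x], vecs[y]) for x, y in pairs]
rho = spearman(human, model)
```

On this toy set the two rankings agree perfectly, so the correlation is 1; on real test pairs the score falls somewhere between $-1$ and 1, which is the quantity reported in the experiments below.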
], [ "In this section, we describe the architectures used for learning word embeddings for the Twi and Yorùbá languages. Also, we discuss the quality of the embeddings as measured by the correlation with human judgements on the translated wordSim-353 test sets and by the F1 score in a NER task." ], [ "Modeling sub-word units has recently become a popular way to address the out-of-vocabulary word problem in NLP, especially in word representation learning BIBREF19, BIBREF2, BIBREF4. A sub-word unit can be a character, character $n$-grams, or heuristically learned Byte Pair Encodings (BPE), which work very well in practice, especially for morphologically rich languages. Here, we consider two word embedding models that make use of character-level information together with word information: Character Word Embedding (CWE) BIBREF20 and fastText BIBREF2. Both of them are extensions of the Word2Vec architectures BIBREF0 that model sub-word units, character embeddings in the case of CWE and character $n$-grams for fastText.", "CWE was introduced in 2015 to model the embeddings of characters jointly with words in order to address the issues of character ambiguities and non-compositional words, especially in the Chinese language. A word or character embedding is learned in CWE using either CBOW or skipgram architectures, and then the final word embedding is computed by adding the character embeddings to the word itself: $x_j = w_j + \\frac{1}{N_j}\\sum_{k=1}^{N_j} c_k$,", "where $w_j$ is the word embedding of $x_j$, $N_j$ is the number of characters in $x_j$, and $c_k$ is the embedding of the $k$-th character $c_k$ in $x_j$.", "Similarly, in 2017 fastText was introduced as an extension to skipgram in order to take into account morphology and improve the representation of rare words. In this case the embedding of a word also includes the embeddings of its character $n$-grams: $x_j = w_j + \\frac{1}{G_j}\\sum_{k=1}^{G_j} g_k$,", "where $w_j$ is the word embedding of $x_j$, $G_j$ is the number of character $n$-grams in $x_j$ and $g_k$ is the embedding of the $k$-th $n$-gram.", "CWE also proposed three alternatives to learn multiple embeddings per character and resolve ambiguities: (i) position-based character embeddings, where each character has different embeddings depending on the position it appears in within a word, i.e., beginning, middle or end; (ii) cluster-based character embeddings, where a character can have $K$ different cluster embeddings; and (iii) position-based cluster embeddings (CWE-LP), where for each position $K$ different embeddings are learned. We use the latter in our experiments with CWE, but no positional embeddings are used with fastText.", "Finally, we consider a contextualized embedding architecture, BERT BIBREF4. BERT is a masked language model based on the highly efficient and parallelizable Transformer architecture BIBREF21, known to produce very rich contextualized representations for downstream NLP tasks.", "The architecture is trained by jointly conditioning on both left and right contexts in all the transformer layers using two unsupervised objectives: Masked LM and Next-sentence prediction. The representation of a word is therefore learned according to the context it is found in.", "Training contextual embeddings requires huge amounts of text, which are not available for low-resourced languages such as Yorùbá and Twi. However, Google provided pre-trained multilingual embeddings for 102 languages including Yorùbá (but not Twi)."
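As a rough illustration of the sub-word schemes compared here, the sketch below extracts fastText-style character n-grams and composes a word vector from its parts, mirroring the additive word-plus-sub-word composition described above (character vectors for CWE, n-gram vectors for fastText). The tiny example vectors and dimensions are made up for illustration:

```python
def char_ngrams(word, n_min=3, n_max=6):
    # fastText wraps each word in boundary markers before extracting n-grams
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def compose(word_vec, part_vecs):
    # word embedding = word vector plus the average of its sub-word vectors
    # (character vectors for CWE, n-gram vectors for fastText)
    if not part_vecs:
        return list(word_vec)
    n = len(part_vecs)
    return [w + sum(p[i] for p in part_vecs) / n
            for i, w in enumerate(word_vec)]

# e.g. the Yorùbá word "owó" yields the trigrams '<ow', 'owó', 'wó>'
trigrams = char_ngrams("owó", 3, 3)
```

Sharing n-gram vectors across words is what lets such models assign non-trivial vectors to rare or unseen inflected forms, which is the property exploited here for morphologically rich, diacritic-heavy text.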
], [ "As a first experiment, we compare the quality of fastText embeddings trained on (high-quality) curated data and (low-quality) massively extracted data for Twi and Yorùbá languages.", "Facebook released pre-trained word embeddings using fastText for 294 languages trained on Wikipedia BIBREF2 (F1 in tables) and for 157 languages trained on Wikipedia and Common Crawl BIBREF7 (F2). For Yorùbá, both versions are available but only embeddings trained on Wikipedia are available for Twi. We consider these embeddings the result of training on what we call massively-extracted corpora. Notice that training settings for both embeddings are not exactly the same, and differences in performance might come both from corpus size/quality but also from the background model. The 294-languages version is trained using skipgram, in dimension 300, with character$n$-grams of length 5, a window of size 5 and 5 negatives. The 157-languages version is trained using CBOW with position-weights, in dimension 300, with character$n$-grams of length 5, a window of size 5 and 10 negatives.", "We want to compare the performance of these embeddings with the equivalent models that can be obtained by training on the different sources verified by native speakers of Twi and Yorùbá; what we call curated corpora and has been described in Section SECREF4 For the comparison, we define 3 datasets according to the quality and quantity of textual data used for training: (i) Curated Small Dataset (clean), C1, about 1.6 million tokens for Yorùbá and over 735 k tokens for Twi. The clean text for Twi is the Bible and for Yoruba all texts marked under the C1 column in Table TABREF7. (ii) In Curated Small Dataset (clean + noisy), C2, we add noise to the clean corpus (Wikipedia articles for Twi, and BBC Yorùbá news articles for Yorùbá). This increases the number of training tokens for Twi to 742 k tokens and Yorùbá to about 2 million tokens. 
(iii) Curated Large Dataset, C3 consists of all available texts we are able to crawl and source out for, either clean or noisy. The addition of JW300 BIBREF22 texts increases the vocabulary to more than 10 k tokens in both languages.", "We train our fastText systems using a skipgram model with an embedding size of 300 dimensions, context window size of 5, 10 negatives and$n$-grams ranging from 3 to 6 characters similarly to the pre-trained models for both languages. Best results are obtained with minimum word count of 3.", "Table TABREF15 shows the Spearman correlation between human judgements and cosine similarity scores on the wordSim-353 test set. Notice that pre-trained embeddings on Wikipedia show a very low correlation with humans on the similarity task for both languages ($\\rho $=$0.14$) and their performance is even lower when Common Crawl is also considered ($\\rho $=$0.07$for Yorùbá). An important reason for the low performance is the limited vocabulary. The pre-trained Twi model has only 935 tokens. For Yorùbá, things are apparently better with more than 150 k tokens when both Wikipedia and Common Crawl are used but correlation is even lower. An inspection of the pre-trained embeddings indicates that over 135 k words belong to other languages mostly English, French and Arabic.", "If we focus only on Wikipedia, we see that many texts are without diacritics in Yorùbá and often make use of mixed dialects and English sentences in Twi.", "The Spearman$\\rho $correlation for fastText models on the curated small dataset (clean), C1, improves the baselines by a large margin ($\\rho =0.354$for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be justified just by the larger vocabulary in Twi, but in the case of Yorùbá the enhancement is there with almost half of the vocabulary size. We found out that adding some noisy texts (C2 dataset) slightly improves the correlation for Twi language but not for the Yorùbá language. 
The Twi language benefits from Wikipedia articles because its inclusion doubles the vocabulary and reduces the bias of the model towards religious texts. However, for Yorùbá, noisy texts often ignore diacritics or tonal marks which increases the vocabulary size at the cost of an increment in the ambiguity too. As a result, the correlation is slightly hurt. One would expect that training with more data would improve the quality of the embeddings, but we found out with the results obtained with the C3 dataset, that only high-quality data helps. The addition of JW300 boosts the vocabulary in both cases, but whereas for Twi the corpus mixes dialects and is noisy, for Yorùbá it is very clean and with full diacritics. Consequently, the best embeddings for Yorùbá are obtained when training with the C3 dataset, whereas for Twi, C2 is the best option. In both cases, the curated embeddings improve the correlation with human judgements on the similarity task a$\\Delta \\rho =+0.25$or, equivalently, by an increment on$\\rho $of 170% (Twi) and 180% (Yorùbá)." ], [ "The huge ambiguity in the written Twi language motivates the exploration of different approaches to word embedding estimations. In this work, we compare the standard fastText methodology to include sub-word information with the character-enhanced approach with position-based clustered embeddings (CWE-LP as introduced in Section SECREF17). With the latter, we expect to specifically address the ambiguity present in a language that does not translate the different oral tones on vowels into the written language.", "The character-enhanced word embeddings are trained using a skipgram architecture with cluster-based embeddings and an embedding size of 300 dimensions, context window-size of 5, and 5 negative samples. 
In this case, the best performance is obtained with a minimum word count of 1, and that increases the effective vocabulary that is used for training the embeddings with respect to the fastText experiments reported in Table TABREF15.", "We repeat the same experiments as with fastText and summarise them in Table TABREF16. If we compare the relative numbers for the three datasets (C1, C2 and C3) we observe the same trends as before: the performance of the embeddings in the similarity task improves with the vocabulary size when the training data can be considered clean, but the performance diminishes when the data is noisy.", "According to the results, CWE is specially beneficial for Twi but not always for Yorùbá. Clean Yorùbá text, does not have the ambiguity issues at character-level, therefore the$n$-gram approximation works better when enough clean data is used ($\\rho ^{C3}_{CWE}=0.354$vs.$\\rho ^{C3}_{fastText}=0.391$) but it does not when too much noisy data (no diacritics, therefore character-level information would be needed) is used ($\\rho ^{C2}_{CWE}=0.345$vs.$\\rho ^{C2}_{fastText}=0.302$). For Twi, the character-level information reinforces the benefits of clean data and the best correlation with human judgements is reached with CWE embeddings ($\\rho ^{C2}_{CWE}=0.437$vs.$\\rho ^{C2}_{fastText}=0.388$)." ], [ "In order to go beyond the similarity task using static word vectors, we also investigate the quality of the multilingual BERT embeddings by fine-tuning a named entity recognition task on the Yorùbá Global Voices corpus.", "One of the major advantages of pre-trained BERT embeddings is that fine-tuning of the model on downstream NLP tasks is typically computationally inexpensive, often with few number of epochs. However, the data the embeddings are trained on has the same limitations as that used in massive word embeddings. 
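The sequence-tagging setup used for the NER evaluation, a linear classifier over per-token representations, can be caricatured without any deep learning library. In practice the head sits on top of BERT's token outputs and is trained end-to-end; here the weights, token vectors and dimensions are made-up placeholders just to show the decoding step, with a tag set matching the annotation above:

```python
# Hypothetical tag set matching the annotation above (plus the "O" tag)
LABELS = ["O", "PER", "LOC", "ORG", "DATE"]

def tag_tokens(token_vecs, W, b):
    # linear head: one logit per label for each token vector, argmax decoding
    tags = []
    for h in token_vecs:
        logits = [sum(wi * hi for wi, hi in zip(row, h)) + bk
                  for row, bk in zip(W, b)]
        tags.append(LABELS[max(range(len(LABELS)), key=logits.__getitem__)])
    return tags

# Made-up 5-dimensional "token representations" and an identity weight matrix
W = [[1.0 if i == j else 0.0 for j in range(5)] for i in range(5)]
b = [0.0] * 5
predicted = tag_tokens([[0, 3, 0, 0, 0], [2, 0, 0, 0, 0]], W, b)  # ["PER", "O"]
```

Only `W` and `b` (and the underlying encoder) are learned during fine-tuning; the quality of the token representations is what the F1 scores below actually measure.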
Fine-tuning involves replacing the last layer of BERT, used for optimizing the masked LM, with a task-dependent linear classifier or any other deep learning architecture, and training all the model parameters end-to-end. For the NER task, we obtain the token-level representation from BERT and train a linear classifier for sequence tagging.", "Similar to our observations with non-contextualized embeddings, we find that fine-tuning the pre-trained multilingual-uncased BERT for 4 epochs on the NER task gives an F1 score of 0. If we do the same experiment in English, F1 is 58.1 after 4 epochs.", "That shows how pre-trained embeddings by themselves do not perform well in downstream tasks on low-resource languages. To address this problem for Yorùbá, we fine-tune BERT representations on the Yorùbá corpus in two ways: (i) using the multilingual vocabulary, and (ii) using only the Yorùbá vocabulary. In both cases diacritics are ignored to be consistent with the base model training.", "As expected, fine-tuning the pre-trained BERT on the Yorùbá corpus in the two configurations generates better representations than the base model. These models are able to achieve a better performance on the NER task, with an average F1 score of over 47% (see Table TABREF26 for the comparison). The fine-tuned BERT model with only the Yorùbá vocabulary further increases the F1 score by more than 4% over the tuning that uses the multilingual vocabulary. Although we do not have enough data to train BERT from scratch, we observe that fine-tuning BERT on a limited amount of monolingual data of a low-resource language helps to improve the quality of the embeddings. The same observation holds true for high-resource languages like German and French BIBREF23." ], [ "In this paper, we present curated word and contextual embeddings for Yorùbá and Twi. For this purpose, we gather and select corpora and study the most appropriate techniques for the languages.
We also create test sets for the evaluation of the word embeddings within a word similarity task (wordsim353) and the contextual embeddings within a NER task. Corpora, embeddings and test sets are available on GitHub.", "In our analysis, we show how massively generated embeddings perform poorly for low-resourced languages as compared to the performance for high-resourced ones. This is due both to the quantity and the quality of the data used. While the Spearman $\\rho $ correlations for English obtained with fastText embeddings trained on Wikipedia (WP) and Common Crawl (CC) are $\\rho _{WP}$=$0.67$ and $\\rho _{WP+CC}$=$0.78$, the equivalent ones for Yorùbá are $\\rho _{WP}$=$0.14$ and $\\rho _{WP+CC}$=$0.07$. For Twi, only embeddings trained on Wikipedia are available ($\\rho _{WP}$=$0.14$). By carefully gathering high-quality data and optimising the models to the characteristics of each language, we deliver embeddings with correlations of $\\rho $=$0.39$ (Yorùbá) and $\\rho $=$0.44$ (Twi) on the same test set, still far from the high-resourced models, but representing an improvement of over $170\\%$ on the task.", "In a low-resourced setting, data quality, processing and model selection are more critical than in a high-resourced scenario. We show how the characteristics of a language (such as diacritization in our case) should be taken into account in order to choose the relevant data and model to use. As an example, Twi word embeddings are significantly better when training on 742 k selected tokens than on 16 million noisy tokens, and when using a model that takes into account single-character information (CWE-LP) instead of $n$-gram information (fastText).", "Finally, we want to note that, even within a corpus, the quality of the data might depend on the language. Wikipedia is usually regarded as a high-quality, freely available multilingual corpus as compared to noisier data such as Common Crawl.
However, for the two languages under study, Wikipedia turned out to have too much noise: interference from other languages, text clearly written by non-native speakers, lack of diacritics and a mixture of dialects. The JW300 corpus, on the other hand, has been rated as high-quality by our native Yorùbá speakers, but as noisy by our native Twi speakers. In both cases, the experiments confirm these conclusions." ], [ "The authors thank Dr. Clement Odoje of the Department of Linguistics and African Languages, University of Ibadan, Nigeria and Olóyè Gbémisóyè Àrdèó for helping us with the Yorùbá translation of the WordSim-353 word pairs and Dr. Felix Y. Adu-Gyamfi and Ps. Isaac Sarfo for helping with the Twi translation. We also thank the members of the Niger-Volta Language Technologies Institute for providing us with a clean Yorùbá corpus.", "The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee). Responsibility for the content of this publication is with the authors." ] ] } { "question": [ "What turn out to be more important high volume or high quality data?", "How much is model improved by massive data and how much by quality?", "What two architectures are used?"
], "question_id": [ "347e86893e8002024c2d10f618ca98e14689675f", "10091275f777e0c2890c3ac0fd0a7d8e266b57cf", "cbf1137912a47262314c94d36ced3232d5fa1926" ], "nlp_background": [ "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "only high-quality data helps" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The Spearman$\\rho $correlation for fastText models on the curated small dataset (clean), C1, improves the baselines by a large margin ($\\rho =0.354$for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be justified just by the larger vocabulary in Twi, but in the case of Yorùbá the enhancement is there with almost half of the vocabulary size. We found out that adding some noisy texts (C2 dataset) slightly improves the correlation for Twi language but not for the Yorùbá language. The Twi language benefits from Wikipedia articles because its inclusion doubles the vocabulary and reduces the bias of the model towards religious texts. However, for Yorùbá, noisy texts often ignore diacritics or tonal marks which increases the vocabulary size at the cost of an increment in the ambiguity too. As a result, the correlation is slightly hurt. One would expect that training with more data would improve the quality of the embeddings, but we found out with the results obtained with the C3 dataset, that only high-quality data helps. The addition of JW300 boosts the vocabulary in both cases, but whereas for Twi the corpus mixes dialects and is noisy, for Yorùbá it is very clean and with full diacritics. 
Consequently, the best embeddings for Yorùbá are obtained when training with the C3 dataset, whereas for Twi, C2 is the best option. In both cases, the curated embeddings improve the correlation with human judgements on the similarity task by $\\Delta \\rho = +0.25$ or, equivalently, by an increment on $\\rho$ of 170% (Twi) and 180% (Yorùbá)." ], "highlighted_evidence": [ "One would expect that training with more data would improve the quality of the embeddings, but we found out with the results obtained with the C3 dataset, that only high-quality data helps." ] }, { "unanswerable": false, "extractive_spans": [ "high-quality" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The Spearman $\\rho$ correlation for fastText models on the curated small dataset (clean), C1, improves the baselines by a large margin ($\\rho = 0.354$ for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be justified just by the larger vocabulary in Twi, but in the case of Yorùbá the enhancement is there with almost half of the vocabulary size. We found out that adding some noisy texts (C2 dataset) slightly improves the correlation for Twi language but not for the Yorùbá language. The Twi language benefits from Wikipedia articles because its inclusion doubles the vocabulary and reduces the bias of the model towards religious texts. However, for Yorùbá, noisy texts often ignore diacritics or tonal marks which increases the vocabulary size at the cost of an increment in the ambiguity too. As a result, the correlation is slightly hurt. One would expect that training with more data would improve the quality of the embeddings, but we found out with the results obtained with the C3 dataset, that only high-quality data helps. The addition of JW300 boosts the vocabulary in both cases, but whereas for Twi the corpus mixes dialects and is noisy, for Yorùbá it is very clean and with full diacritics.
Consequently, the best embeddings for Yorùbá are obtained when training with the C3 dataset, whereas for Twi, C2 is the best option. In both cases, the curated embeddings improve the correlation with human judgements on the similarity task by $\\Delta \\rho = +0.25$ or, equivalently, by an increment on $\\rho$ of 170% (Twi) and 180% (Yorùbá)." ], "highlighted_evidence": [ "One would expect that training with more data would improve the quality of the embeddings, but we found out with the results obtained with the C3 dataset, that only high-quality data helps." ] } ], "annotation_id": [ "46dba8cddcfcbf57b2837040db3a5e9a5f7ceaa3", "b5922f00879502196670bb7b26b229547de5fec4" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "863f554da4c30e1548dffc1da53632c9af7f005a" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "fastText", "CWE-LP" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As a first experiment, we compare the quality of fastText embeddings trained on (high-quality) curated data and (low-quality) massively extracted data for Twi and Yorùbá languages.", "The huge ambiguity in the written Twi language motivates the exploration of different approaches to word embedding estimations. In this work, we compare the standard fastText methodology to include sub-word information with the character-enhanced approach with position-based clustered embeddings (CWE-LP as introduced in Section SECREF17). With the latter, we expect to specifically address the ambiguity present in a language that does not translate the different oral tones on vowels into the written language.
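The evaluation protocol quoted in the evidence above ranks embedding models by the Spearman $\rho$ correlation between model cosine similarities and human WordSim-353 similarity judgements. A minimal, self-contained sketch of that protocol, assuming toy two-dimensional vectors and hypothetical word pairs and human scores (the study itself uses full fastText and CWE-LP vectors over the translated WordSim-353 set):

```python
import math

def cosine(u, v):
    # cosine similarity between two dense vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank(values):
    # ranks 1..n, averaging ranks over ties
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    # Spearman rho = Pearson correlation of the rank sequences
    rx, ry = rank(xs), rank(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den

# hypothetical embeddings and human similarity judgements (toy values)
emb = {
    "owo": [1.0, 0.1], "ise": [0.9, 0.2],
    "oja": [0.1, 1.0], "ile": [-0.8, 0.3],
}
pairs = [("owo", "ise", 9.0), ("owo", "oja", 3.0), ("oja", "ile", 1.0)]
model_scores = [cosine(emb[a], emb[b]) for a, b, _ in pairs]
human_scores = [h for _, _, h in pairs]
rho = spearman(model_scores, human_scores)
```

A higher $\rho$ means the model's similarity ordering agrees better with the human ordering, which is exactly the quantity reported per corpus (C1, C2, C3) above.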
], "highlighted_evidence": [ "As a first experiment, we compare the quality of fastText embeddings trained on (high-quality) curated data and (low-quality) massively extracted data for Twi and Yorùbá languages.", "The huge ambiguity in the written Twi language motivates the exploration of different approaches to word embedding estimations. In this work, we compare the standard fastText methodology to include sub-word information with the character-enhanced approach with position-based clustered embeddings (CWE-LP as introduced in Section SECREF17)." ] } ], "annotation_id": [ "0961f0f256f4d2c31bcc6e188931422f79883a5a" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] } 1810.04528 Is there Gender bias and stereotype in Portuguese Word Embeddings? In this work, we propose an analysis of the presence of gender bias associated with professions in Portuguese word embeddings. The objective of this work is to study gender implications related to stereotyped professions for women and men in the context of the Portuguese language. { "section_name": [ "Introduction", "Related Work", "Portuguese Embedding", "Proposed Approach", "Experiments", "Final Remarks" ], "paragraphs": [ [ "Recently, the transformative potential of machine learning (ML) has propelled ML into the forefront of mainstream media. In Brazil, the use of such technique has been widely diffused gaining more space. Thus, it is used to search for patterns, regularities or even concepts expressed in data sets BIBREF0 , and can be applied as a form of aid in several areas of everyday life.", "Among the different definitions, ML can be seen as the ability to improve performance in accomplishing a task through the experience BIBREF1 . Thus, BIBREF2 presents this as a method of inferences of functions or hypotheses capable of solving a problem algorithmically from data representing instances of the problem. 
This is an important way to solve different types of problems that permeate computer science and other areas.", "One of the main uses of ML is in text processing, where the analysis of the content is the entry point for various learning algorithms. However, the use of this content can introduce different types of bias into training, and this may vary with the context at hand. This work aims to analyze and remove gender stereotypes from word embeddings in Portuguese, analogous to what was done in BIBREF3 for the English language. Hence, we propose to employ a publicly available pre-trained word2vec model to analyze gender bias in the Portuguese language, quantifying the biases present in the model so that it is possible to reduce the spread of sexism in such models. There is also a bias-reduction stage applied to the results obtained from the model, in which we analyze the effects of applying gender-distinction reduction techniques.", "This paper is organized as follows: Section SECREF2 discusses related work. Section SECREF3 presents the Portuguese word2vec embeddings model used in this paper and Section SECREF4 proposes our method. Section SECREF5 presents the experimental results, whose purpose is to verify the effect of applying a de-bias algorithm to the Portuguese word2vec embeddings model, together with a short discussion. Section SECREF6 brings our concluding remarks." ], [ "There is a wide range of techniques that provide interesting results in the context of ML algorithms geared to the classification of data without discrimination; these techniques range from the pre-processing of data BIBREF4 to the use of bias removal techniques BIBREF5 themselves. Approaches linked to the data pre-processing step usually consist of methods based on improving the quality of the dataset, after which the usual classification tools can be used to train a classifier; such approaches thus start from a baseline already stipulated by the dataset itself.
On the other side of the spectrum, there are unsupervised and semi-supervised learning techniques, which are attractive because they do not incur the cost of corpus annotation BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .", "Bias reduction has been studied as a way to reduce discrimination in classification through different approaches BIBREF10 BIBREF11 . In BIBREF12 the authors propose to specify, implement, and evaluate the “fairness-aware” ML interface called themis-ml. In this interface, the main idea is to train models on a modified version of the dataset. Themis-ml implements two methods for training fairness-aware models. The tool also relies on two methods to make model-type-agnostic predictions: Reject Option Classification and Discrimination-Aware Ensemble Classification, these procedures being used to post-process predictions in a way that reduces potentially discriminatory predictions. According to the authors, the method shows potential as a means of reducing bias in the use of ML algorithms.", "In BIBREF3 , the authors propose a hard de-biasing method to reduce bias in English word embeddings collected from Google News. Using word2vec, they performed a geometric analysis of the gender direction of the bias contained in the data. Using this property together with the generation of gender-neutral analogies, a methodology was provided for modifying an embedding to remove gender stereotypes. Some metrics were defined to quantify both direct and indirect gender biases in embeddings and to develop algorithms to reduce bias in the embedding. Hence, the authors show that embeddings can be used in applications without amplifying gender bias." ], [ "In BIBREF13 , the quality of the representation of words through vectors in several models is discussed.
According to the authors, the ability to train high-quality models using simplified architectures is useful in models composed of predictive methods that try to predict neighboring words with one or more context words, such as Word2Vec. Word embeddings have been used to provide meaningful representations for words in an efficient way.", "In BIBREF14 , several word embedding models trained on a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given the word and attempts to predict its neighboring words. In the second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. The latter was chosen for application in the present proposal.", "The authors of BIBREF14 claim to have collected a large corpus from several sources to obtain a multi-genre corpus representative of the Portuguese language. Hence, it comprehensively covers different expressions of the language, making it possible to analyze gender bias and stereotype in Portuguese word embeddings. The dataset used was tokenized and normalized by the authors to reduce the corpus vocabulary size, under the premise that vocabulary reduction provides more representative vectors." ], [ "Some linguists point out that the female gender is, in Portuguese, a particularization of the masculine. In this view, the only gender mark is the feminine, the other forms being considered without gender (including nouns considered masculine). In BIBREF15 the gender representation in Portuguese is associated with a set of phenomena, not only from a linguistic perspective but also from a socio-cultural perspective. Since the terminations of words (e.g., advogada and advogado) are mostly used to indicate to whom the expression refers, stereotypes can be explained through communication.
This implies the presence of biases when dealing with terms such as those referring to professions.", "Figure FIGREF1 illustrates the approach proposed in this work. First, using as a parameter a list of professions that pairs the female and male labels for each occupation, we evaluate the accuracy of the similarities generated by the embeddings. Then, given the biased results, we apply the de-bias algorithm BIBREF3 aiming to reduce the sexist analogies previously generated. Finally, all the results are analyzed by comparing the accuracies.", "Using the word2vec model available in a public repository BIBREF14 , the proposal involves the analysis of the most similar analogies generated before and after the application of the de-bias method BIBREF3 . The work is focused on the analysis of gender bias associated with professions in word embeddings, and therefore on the evaluation of the accuracy of the associations generated, aiming at results as good as possible without compromising the evaluation metrics.", "Algorithm SECREF4 describes the method performed during the evaluation of the gender bias presence. In this method we evaluate the accuracy of the analogies generated through the model, that is, we verify the cases of association matching generated between the words.", "[!htb] Model Evaluation [1]", "w2v_evaluate(model_path): model = open_model(model_path); count = 0; for each (female, male) in the list of profession pairs: x = model.most_similar(positive=['ela', male], negative=['ele'])", "if x = female then count += 1; accuracy = count / size(profession_pairs); return accuracy" ], [ "The purpose of this section is to perform different analyses concerning bias in word2vec models with Portuguese embeddings. The Continuous Bag-of-Words model used was provided by BIBREF14 (described in Section SECREF3 ). For these experiments, we use a model containing 934,966 words of dimension 300 per vector representation.
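Algorithm SECREF4 (Model Evaluation) can be read as the following runnable sketch. The gensim call model.most_similar(positive=['ela', male], negative=['ele']) is replaced here by a small in-memory stand-in (cosine similarity against the mean of the positive vectors minus the negative vector); the three-dimensional vectors and the two profession pairs are illustrative toys, not the fifty pairs used in the paper:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def most_similar(emb, positive, negative):
    """Toy stand-in for gensim's most_similar: return the word closest
    (by cosine) to mean(positive vectors) - mean(negative vectors)."""
    dim = len(next(iter(emb.values())))
    query = [0.0] * dim
    for w in positive:
        query = [q + x / len(positive) for q, x in zip(query, emb[w])]
    for w in negative:
        query = [q - x / len(negative) for q, x in zip(query, emb[w])]
    exclude = set(positive) | set(negative)
    return max((w for w in emb if w not in exclude),
               key=lambda w: cosine(emb[w], query))

def w2v_evaluate(emb, profession_pairs):
    # the algorithm from the paper: count how often 'ele:male :: ela:?'
    # resolves to the expected female profession label
    count = 0
    for female, male in profession_pairs:
        x = most_similar(emb, positive=["ela", male], negative=["ele"])
        if x == female:
            count += 1
    return count / len(profession_pairs)

# toy vectors: dimension 0 encodes gender, the rest encode the profession
emb = {
    "ele":      [1.0, 0.0, 0.0],
    "ela":      [-1.0, 0.0, 0.0],
    "advogado": [1.0, 1.0, 0.0],
    "advogada": [-1.0, 1.0, 0.0],
    "medico":   [1.0, 0.0, 1.0],
    "medica":   [-1.0, 0.0, 1.0],
}
pairs = [("advogada", "advogado"), ("medica", "medico")]
accuracy = w2v_evaluate(emb, pairs)
```

On these toy vectors the analogy resolves perfectly; on a real pre-trained model the same loop yields the (much lower) accuracies reported in Table TABREF3.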
To realize the experiments, a list containing fifty profession labels with female and male forms was used as the parameter of the similarity comparison.", "Using the Python library gensim, we evaluate the extreme analogies generated when comparing vectors like: x = ela - ele + male, where male represents the item from the professions list and x the expected association. The most_similar function finds the top-N most similar entities, computing cosine similarity between a simple mean of the projection weight vectors of the given words. Figure FIGREF4 presents the most extreme analogy results obtained from the model using these comparisons.", "Applying the Algorithm SECREF4 , we check the accuracy obtained with the similarity function before and after the application of the de-bias method. Table TABREF3 presents the corresponding results. In cases like the analogy of 'garçonete' to 'stripper' (Figure FIGREF4 , line 8), it is possible to observe that the relationship stipulated between terms with sexual connotation and females is closer than between females and professions. For male terms, in contrast, even in cases of mismatch the closest analogy remains in the professional environment.", "Using a confidence level of 99%, when comparing the correctness levels of the model with and without the reduction of bias, the prediction of the model with bias is significantly better. Different authors BIBREF16 BIBREF17 show that the removal of bias in models produces a negative impact on the quality of the model. On the other hand, it is observed that even with a better hit rate the correctness rate in the prediction of related terms is still low." ], [ "This paper presents an analysis of the presence of gender bias in Portuguese word embeddings.
Even though it is a work in progress, the proposal showed promising results in analyzing prediction models.", "A possible extension of the work involves deepening the analysis of the results obtained, seeking to achieve higher accuracy rates and fairer models to be used in machine learning techniques. Thus, these studies can involve tests ranging from different methods of pre-processing the data to the use of different models, as well as other factors that may influence the results generated. This deepening is necessary since the model's accuracy is not high.", "To conclude, we believe that the presence of gender bias and stereotypes in the Portuguese language is found in different spheres of language, and it is important to study ways of mitigating different types of discrimination. As such, the approach can be easily applied to analyze racist bias in the language, as well as other types of preconceptions." ] ] } { "question": [ "Does this paper target European or Brazilian Portuguese?", "What were the word embeddings trained on?", "Which word embeddings are analysed?"
], "question_id": [ "519db0922376ce1e87fcdedaa626d665d9f3e8ce", "99a10823623f78dbff9ccecb210f187105a196e9", "09f0dce416a1e40cc6a24a8b42a802747d2c9363" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "bias", "bias", "bias" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] }, { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "0a93ba2daf6764079c983e70ca8609d6d1d8fa5c", "c6686e4e6090f985be4cc72a08ca2d4948b355bb" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "large Portuguese corpus" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given the word and attempts to predict its neighboring words. The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. The latter was chosen for application in the present proposal.", "Using the word2vec model available in a public repository BIBREF14 , the proposal involves the analysis of the most similar analogies generated before and after the application of the BIBREF3 . The work is focused on the analysis of gender bias associated with professions in word embeddings. 
So therefore into the evaluation of the accuracy of the associations generated, aiming at achieving results as good as possible without prejudicing the evaluation metrics." ], "highlighted_evidence": [ "In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. ", "Using the word2vec model available in a public repository BIBREF14 , the proposal involves the analysis of the most similar analogies generated before and after the application of the BIBREF3 . " ] } ], "annotation_id": [ "e0cd186397ec9543e48d25f5944fc9318542f1d5" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Continuous Bag-of-Words (CBOW)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given the word and attempts to predict its neighboring words. The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. The latter was chosen for application in the present proposal." ], "highlighted_evidence": [ "The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. " ] } ], "annotation_id": [ "8b5278bfc35cf0a1b43ceb3418c2c5d20f213a31" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] } 2002.02224 Citation Data of Czech Apex Courts In this paper, we introduce the citation data of the Czech apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). This dataset was automatically extracted from the corpus of texts of Czech court decisions - CzCDC 1.0. 
We obtained the citation data by building a natural language processing pipeline for extraction of the court decision identifiers. The pipeline included the (i) document segmentation model and the (ii) reference recognition model. Furthermore, the dataset was manually processed to achieve high-quality citation data as a base for subsequent qualitative and quantitative analyses. The dataset will be made available to the general public. { "section_name": [ "Introduction", "Related work ::: Legal Citation Analysis", "Related work ::: Reference Recognition", "Related work ::: Data Availability", "Related work ::: Document Segmentation", "Methodology", "Methodology ::: Dataset and models ::: CzCDC 1.0 dataset", "Methodology ::: Dataset and models ::: Reference recognition model", "Methodology ::: Dataset and models ::: Text segmentation model", "Methodology ::: Pipeline", "Results", "Discussion", "Conclusion", "Acknowledgment" ], "paragraphs": [ [ "Analysis of the way court decisions refer to each other provides us with important insights into the decision-making process at courts. This is true both for the common law courts and for their counterparts in the countries belonging to the continental legal system. Citation data can be used for both qualitative and quantitative studies, casting light on the behavior of specific judges through document analysis or allowing complex studies into the changing nature of courts in transforming countries.", "That being said, it is still difficult to create sufficiently large citation datasets to allow complex research. In the case of the Czech Republic, it was difficult to obtain a relevant dataset of the court decisions of the apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). Due to its size, it is nearly impossible to extract the references manually. One has to turn to automation of such a task.
However, our study of court decisions revealed many different ways that courts use to cite even decisions of their own, not to mention the decisions of other courts. The great diversity in citations led us to use natural language processing methods for the recognition and extraction of the citation data from court decisions of the Czech apex courts.", "In this paper, we describe the tool ultimately used for the extraction of the references from the court decisions, together with a subsequent way of manual processing of the raw data to achieve a higher-quality dataset. Section SECREF2 maps the related work in the area of legal citation analysis (Section SECREF1), reference recognition (Section SECREF2), text segmentation (Section SECREF4), and data availability (Section SECREF3). Section SECREF3 describes the method we used for the citation extraction, listing the individual models and the way we have combined these models into the NLP pipeline. Section SECREF4 presents results in the terms of evaluation of the performance of our pipeline, the statistics of the raw data, further manual processing and statistics of the final citation dataset. Section SECREF5 discusses limitations of our work and outlines the possible future development. Section SECREF6 concludes this paper." ], [ "Legal citation analysis is an emerging phenomenon in the field of legal theory and legal empirical research. It employs tools provided by the field of network analysis.", "In spite of the long-term use of citations in the legal domain (e.g. the use of Shepard's Citations since 1873), interest in network citation analysis increased significantly when Fowler et al. published the two pivotal works on the case law citations by the Supreme Court of the United States BIBREF0, BIBREF1. The authors used the citation data and network analysis to test hypotheses about the function of the stare decisis doctrine and other issues of legal precedents.
In the continental legal system, this work was followed by Winkels and de Ruyter BIBREF2. The authors applied an approach similar to Fowler's to the court decisions of the Dutch Supreme Court. Similar methods were later used by Derlén and Lindholm BIBREF3, BIBREF4 and Panagis and Šadl BIBREF5 for the citation data of the Court of Justice of the European Union, and by Olsen and Küçüksu for the citation data of the European Court of Human Rights BIBREF6.", "Additionally, a minor part of research in legal network analysis has resulted in practical tools designed to help lawyers conduct case law research. Kuppevelt and van Dijck built prototypes employing these techniques in the Netherlands BIBREF7. Görög and Weisz introduced a new legal information retrieval system, Justeus, based on a large database of legal sources and partly on network analysis methods BIBREF8." ], [ "The area of reference recognition already contains a large amount of work. It is concerned with recognizing text spans in documents that are referring to other documents. As such, it is a classical topic within the AI & Law literature.", "The extraction of references from the Italian legislation based on regular expressions was reported by Palmirani et al. BIBREF9. The main goal was to bring references under a set of common standards to ensure the interoperability between different legal information systems.", "De Maat et al. BIBREF10 focused on an automated detection of references to legal acts in the Dutch language. Their approach consisted of a grammar covering increasingly complex citation patterns.", "Opijnen BIBREF11 aimed for reference recognition and reference standardization using regular expressions accounting for multiple variants of the same reference and multiple vendor-specific identifiers.", "The language-specific work by Kríž et al. BIBREF12 focused on the detection and classification of references to other court decisions and legal acts.
The authors used statistical recognition (HMM and Perceptron algorithms) and reported an F1-measure over 90% averaged over all entities. It is the state of the art in the automatic recognition of references in Czech court decisions. Unfortunately, it allows only for the detection of docket numbers and it is unable to recognize court-specific or vendor-specific identifiers in the court decisions.", "Other language-specific work includes our previous reference recognition model presented in BIBREF13. The prediction model is based on conditional random fields and it allows recognition of different constituents which then establish both explicit and implicit case-law and doctrinal references. Parts of this model were used in the pipeline described further within this paper in Section SECREF3." ], [ "Large scale quantitative and qualitative studies are often hindered by the unavailability of court data. Access to court decisions is often hindered by different obstacles. In some countries, court decisions are not available at all, while in others they are accessible only through legal information systems, often proprietary. This effectively restricts the access to court decisions in terms of bulk data. This issue was already approached by many researchers either through making available selected data for computational linguistics studies or by making available datasets of digitized data for various purposes. A non-exhaustive list of publicly available corpora includes the British Law Report Corpus BIBREF14, The Corpus of US Supreme Court Opinions BIBREF15, the HOLJ corpus BIBREF16, the Corpus of Historical English Law Reports, Corpus de Sentencias Penales BIBREF17, Juristisches Referenzkorpus BIBREF18 and many others.", "Language-specific work in this area is presented by the publicly available Czech Court Decisions Corpus (CzCDC 1.0) BIBREF19.
This corpus contains the majority of court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court, hence allowing a large-scale extraction of references to yield representative results. The CzCDC 1.0 was used as a dataset for extraction of the references as is described further within this paper in Section SECREF3. Unfortunately, despite containing 237,723 court decisions issued between 1st January 1993 and 30th September 2018, it is not complete. This fact is reflected in the analysis of the results." ], [ "A large volume of legal information is available in unstructured form, which makes processing these data a challenging task – both for human lawyers and for computers. Schweighofer BIBREF20 called for generic tools allowing a document segmentation to ease the processing of unstructured data by giving them some structure.", "Topic-based segmentation often focuses on identifying the specific sentences that present borderlines of different textual segments.", "The automatic segmentation is not a goal in itself – it always serves as a prerequisite for further tasks requiring structured data. Segmentation is required for text summarization BIBREF21, BIBREF22, keyword extraction BIBREF23, textual information retrieval BIBREF24, and other applications requiring input in the form of structured data.", "A major part of research is focused on semantic similarity methods. Computing the similarity between parts of a text presumes that a decrease of similarity means a topical border of two text segments. This approach was introduced by Hearst BIBREF22 and was used by Choi BIBREF25 and Heinonen BIBREF26 as well.", "Another approach takes word frequencies and presumes a border according to the different keywords extracted. Reynar BIBREF27 authored a graphical method based on statistics called dotplotting. Similar techniques were used by Ye BIBREF28 or Saravanan BIBREF29. Bommarito et al.
BIBREF30 introduced a Python library combining different features, including pre-trained models, for use in automatic legal text segmentation. Li BIBREF31 included a neural network in his method to segment Chinese legal texts.", "Šavelka and Ashley BIBREF32 similarly introduced a machine learning-based approach for the segmentation of US court decision texts into seven different parts. The authors reached high success rates in recognizing especially the Introduction and Analysis parts of the decisions.", "Language-specific work includes the model presented by Harašta et al. BIBREF33. This work focuses on segmentation of the Czech court decisions into pre-defined topical segments. Parts of this segmentation model were used in the pipeline described further within this paper in Section SECREF3." ], [ "In this paper, we present and describe the citation dataset of the Czech top-tier courts. To obtain this dataset, we have processed the court decisions contained in the CzCDC 1.0 dataset by the NLP pipeline consisting of the segmentation model introduced in BIBREF33, and parts of the reference recognition model presented in BIBREF13. The process is described in this section." ], [ "Novotná and Harašta BIBREF19 prepared a dataset of the court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court. The dataset contains 237,723 decisions published between 1st January 1993 and 30th September 2018. These decisions are organised into three sub-corpora. The sub-corpus of the Supreme Court contains 111,977 decisions, the sub-corpus of the Supreme Administrative Court contains 52,660 decisions and the sub-corpus of the Constitutional Court contains 73,086 decisions. The authors in BIBREF19 assessed that the CzCDC currently contains approximately 91% of all decisions of the Supreme Court, 99.5% of all decisions of the Constitutional Court, and 99.9% of all decisions of the Supreme Administrative Court.
As such, it presents the best currently available dataset of the Czech top-tier court decisions." ], [ "Harašta and Šavelka BIBREF13 introduced a reference recognition model trained specifically for the Czech top-tier courts. Moreover, the authors made their training data available in BIBREF34. Given the lack of a single citation standard, references in this work consist of smaller units, because these were identified as more uniform and therefore better suited for automatic detection. The model was trained using conditional random fields, which is a random field model that is globally conditioned on an observation sequence O. The states of the model correspond to event labels E. The authors used a first-order conditional random field. The model was trained for each type of the smaller unit independently." ], [ "In Harašta et al. BIBREF33, the authors introduced a model for the automatic segmentation of the Czech court decisions into pre-defined multi-paragraph parts. These segments include the Header (introduction of the given case), History (procedural history prior to the apex court proceeding), Submission/Rejoinder (petition of the plaintiff and response of the defendant), Argumentation (argumentation of the court hearing the case), Footer (legally required information, such as information about further proceedings), Dissent and Footnotes. The model for automatic segmentation of the text was trained using conditional random fields. The model was trained for each type independently." ], [ "In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task.
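Both models above are conditional random fields over token sequences. The cited papers describe the full training setup; the sketch below only illustrates the kind of windowed, per-token features such a sequence labeller consumes (the feature names and the docket-number-like sample tokens are hypothetical):

```python
def token_features(tokens, i):
    """Illustrative per-token features for a sequence labeller such as a
    CRF: the token itself, simple shape cues, and a one-token window of
    surrounding context."""
    tok = tokens[i]
    feats = {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "has_slash": "/" in tok,
        # shape: digits -> 'd', everything else -> 'x'
        "shape": "".join("d" if c.isdigit() else "x" for c in tok),
    }
    feats["prev"] = tokens[i - 1].lower() if i > 0 else "<BOS>"
    feats["next"] = tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>"
    return feats

# a hypothetical token fragment around a docket-number-like identifier
sent = ["sp.", "zn.", "21", "Cdo", "2542/2015"]
features = [token_features(sent, i) for i in range(len(sent))]
```

A CRF trained on such feature sequences then assigns each token a label (e.g. inside/outside a court identifier), globally conditioned on the whole sequence rather than deciding token by token.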
The pipeline described in this part is graphically represented in Figure FIGREF10.", "As the first step, every document in the CzCDC 1.0 was segmented using the text segmentation model. This allowed us to treat different parts of the processed court documents differently in the subsequent text processing. Specifically, it allowed us to subject only a specific part of a court decision, in this case the court argumentation, to further reference recognition and extraction. A textual segment recognised as court argumentation is then processed further.", "As the second step, the parts recognised by the text segmentation model as court argumentation were processed using the reference recognition model. After carefully studying the evaluation of the model's performance in BIBREF13, we decided to use only part of the said model. Specifically, we employed the recognition of court identifiers, as we consider the rest of the smaller units introduced by Harašta and Šavelka of lesser value for our task. Also, deploying only the recognition of court identifiers allowed us to avoid the problematic parsing of smaller textual units into references. The text spans recognised as identifiers of court decisions are then processed further.", "At this point, it is necessary to evaluate the performance of the above-mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11.
It shows that organising the two models into a pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification.", "Further processing included:", "control and repair of incompletely identified court identifiers (manual);", "identification and sorting of identifiers as belonging to the Supreme Court, Supreme Administrative Court or Constitutional Court (rule-based, manual);", "standardisation of different types of court identifiers (rule-based, manual);", "parsing of identifiers against the court decisions available in CzCDC 1.0." ], [ "Overall, through the process described in Section SECREF3, we retrieved three datasets of extracted references - one dataset per apex court. These datasets consist of individual pairs containing the identification of the decision from which the reference was retrieved and the identification of the referred document. As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court decisions. These are the numbers of text spans identified as references prior to the further processing described in Section SECREF3.", "These references include all identifiers extracted from the court decisions contained in the CzCDC 1.0. Therefore, this number includes references to all other court decisions as well, including lower courts, the Court of Justice of the European Union, the European Court of Human Rights, decisions of other public authorities etc. It was therefore necessary to classify these into references referring to decisions of the Supreme Court, Supreme Administrative Court, Constitutional Court and others. These groups then underwent a standardisation - or, more precisely, a resolution - of the different court identifiers used by the Czech courts.
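The rule-based sorting step might be sketched as below; the docket-number patterns are simplified illustrations, not the authors' actual rules:

```python
import re

# Illustrative patterns only (hypothetical, not the study's actual rule set):
# classify a recognised identifier span by the court it refers to.
COURT_PATTERNS = {
    "Constitutional Court": re.compile(r"ÚS\s+\d+/\d+"),                     # e.g. "II. ÚS 123/05"
    "Supreme Administrative Court": re.compile(r"\d+\s+A[a-z]*s\s+\d+/\d+"),  # e.g. "5 As 102/2014"
    "Supreme Court": re.compile(r"\d+\s+[A-Z][a-z]?do\s+\d+/\d+"),            # e.g. "29 Cdo 1234/2015"
}

def classify_identifier(span):
    """Return the apex court a recognised identifier belongs to, or 'other'."""
    for court, pattern in COURT_PATTERNS.items():
        if pattern.search(span):
            return court
    return "other"
```

Identifiers classified as "other" (lower courts, CJEU, ECtHR, other public authorities) are the ones dropped before the parsing step.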
The numbers of references resulting from this step are shown in Table TABREF16.", "Following this step, we linked the court identifiers with the court decisions contained in the CzCDC 1.0. Given that the CzCDC 1.0 does not contain all the decisions of the respective courts, we were not able to parse all the references. The numbers of references resulting from this step are shown in Table TABREF17." ], [ "This paper introduced the first dataset of citation data of the three Czech apex courts. Understandably, there are some pitfalls and limitations to our approach.", "As we admitted in the evaluation in Section SECREF9, the models we included in our NLP pipeline are far from perfect. Overall, we were able to achieve reasonable recall and precision rates, which were further enhanced by several rounds of manual processing of the resulting data. However, it is safe to say that we did not manage to extract all the references. Similarly, because the CzCDC 1.0 dataset we used does not contain all the decisions of the respective courts, we were not able to link all court identifiers to the documents they refer to. Therefore, future work in this area may include further development of the resources we used. The CzCDC 1.0 would benefit from the inclusion of more documents of the Supreme Court, the reference recognition model would benefit from more refined training methods, etc.", "That being said, the presented dataset is currently the only resource of its kind focusing on Czech court decisions that is freely available to research teams. This significantly reduces the costs necessary to conduct studies involving network analysis and similar techniques requiring a large amount of citation data." ], [ "In this paper, we have described the process of the creation of the first dataset of citation data of the three Czech apex courts.
The dataset is publicly available for download at https://github.com/czech-case-law-relevance/czech-court-citations-dataset." ], [ "J.H., and T.N. gratefully acknowledge the support from the Czech Science Foundation under grant no. GA-17-20645S. T.N. also acknowledges the institutional support of the Masaryk University. This paper was presented at CEILI Workshop on Legal Data Analysis held in conjunction with Jurix 2019 in Madrid, Spain." ] ] } { "question": [ "Did they experiment on this dataset?", "How is quality of the citation measured?", "How big is the dataset?" ], "question_id": [ "ac706631f2b3fa39bf173cd62480072601e44f66", "8b71ede8170162883f785040e8628a97fc6b5bcb", "fa2a384a23f5d0fe114ef6a39dced139bddac20e" ], "nlp_background": [ "two", "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] }, { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.", "At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. 
The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification." ], "highlighted_evidence": [ "In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.", "At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification." ] } ], "annotation_id": [ "3bf5c275ced328b66fd9a07b30a4155fa476d779", "ae80f5c5b782ad02d1dde21b7384bc63472f5796" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification." 
], "yes_no": null, "free_form_answer": "", "evidence": [ "In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.", "At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification." ], "highlighted_evidence": [ "In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.", "At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification." 
] } ], "annotation_id": [ "ca22977516b8d2f165904d7e9742421ad8d742e2" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "903019 references", "evidence": [ "Overall, through the process described in Section SECREF3, we have retrieved three datasets of extracted references - one dataset per each of the apex courts. These datasets consist of the individual pairs containing the identification of the decision from which the reference was retrieved, and the identification of the referred documents. As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court Decisions. These are numbers of text spans identified as references prior the further processing described in Section SECREF3." ], "highlighted_evidence": [ "As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court Decisions. These are numbers of text spans identified as references prior the further processing described in Section SECREF3." ] } ], "annotation_id": [ "0bdc7f448e47059d71a0ad3c075303900370856a" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] } 2003.07433 LAXARY: A Trustworthy Explainable Twitter Analysis Model for Post-Traumatic Stress Disorder Assessment Veteran mental health is a significant national problem as large number of veterans are returning from the recent war in Iraq and continued military presence in Afghanistan. 
While significant existing work has investigated twitter post-based Post-Traumatic Stress Disorder (PTSD) assessment using blackbox machine learning techniques, these frameworks cannot be trusted by clinicians due to their lack of clinical explainability. To obtain the trust of clinicians, we explore the big question: can twitter posts provide enough information to fill out the clinical PTSD assessment surveys that clinicians have traditionally trusted? To answer this question, we propose the LAXARY (Linguistic Analysis-based Explainable Inquiry) model, a novel Explainable Artificial Intelligence (XAI) model to detect and represent the PTSD assessment of twitter users using a modified Linguistic Inquiry and Word Count (LIWC) analysis. First, we employ clinically validated survey tools to collect clinical PTSD assessment data from real twitter users and develop a PTSD Linguistic Dictionary from the PTSD assessment survey results. Then, we use the PTSD Linguistic Dictionary together with a machine learning model to fill out the survey tools, detecting the PTSD status and its intensity for the corresponding twitter users. Our experimental evaluation on 210 clinically validated veteran twitter users provides promising accuracy for both PTSD classification and intensity estimation. We also evaluate the reliability and validity of the developed PTSD Linguistic Dictionary.
{ "section_name": [ "Introduction", "Overview", "Related Works", "Demographics of Clinically Validated PTSD Assessment Tools", "Twitter-based PTSD Detection", "Twitter-based PTSD Detection ::: Data Collection", "Twitter-based PTSD Detection ::: Pre-processing", "Twitter-based PTSD Detection ::: PTSD Detection Baseline Model", "LAXARY: Explainable PTSD Detection Model", "LAXARY: Explainable PTSD Detection Model ::: PTSD Linguistic Dictionary Creation", "LAXARY: Explainable PTSD Detection Model ::: Psychometric Validation of PTSD Linguistic Dictionary", "LAXARY: Explainable PTSD Detection Model ::: Feature Extraction and Survey Score Estimation", "Experimental Evaluation", "Experimental Evaluation ::: Results", "Challenges and Future Work", "Conclusion" ], "paragraphs": [ [ "Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high-risk activities, including engaging in interpersonal violence, attempting suicide, committing suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veteran Administration's National Center for PTSD (NCPTSD) suggests reconceptualizing PTSD not just in terms of a psychiatric symptom cluster, but focusing instead on the specific high-risk behaviors associated with it, as these may be directly addressed through behavioral change efforts BIBREF0. Consensus prevalence estimates suggest that PTSD impacts between 15-20% of the veteran population, and it is typically chronic and treatment resistant BIBREF0. The PTSD patient support programs organized by different veteran peer support organizations use a set of surveys for local weekly assessment to detect the intensity of PTSD among the returning veterans.
However, recent surveys on advanced evidence-based care for PTSD sufferers have shown that veterans suffering from chronic PTSD are reluctant to participate in assessments with professionals, which is itself another significant symptom among war-returning veterans with PTSD. Several existing studies have shown that twitter posts of war veterans can be a significant indicator of their mental health and can be utilized to predict PTSD sufferers in time, before their condition goes out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, all of the proposed methods relied either on blackbox machine learning methods or on language model-based sentiment extraction from posted texts, which failed to obtain the acceptance and trust of clinicians due to their lack of explainability.", "In the context of the above research problem, we aim to answer the following research questions:", "Given that clinicians trust clinically validated PTSD assessment surveys, can we fill out PTSD assessment surveys using twitter post analysis of war veterans?", "If possible, what sort of analysis and approach are needed to develop such an XAI model to detect the prevalence and intensity of PTSD among war veterans using only social media (twitter) analysis, where users are free to share their everyday mental and social conditions?", "How much quantitative improvement do we observe in our model's ability to explain both the detection and the intensity estimation of PTSD?", "In this paper, we propose LAXARY, an explainable and trustworthy representation of PTSD classification and its intensity for clinicians.", "The key contributions of our work are summarized below.", "The novelty of LAXARY lies in the proposed clinical survey-based PTSD Linguistic Dictionary creation, with words/aspects that represent the instantaneous perturbation of twitter-based sentiments as a specific pattern and help calculate the possible score of each survey question.", "LAXARY includes a modified LIWC model to
calculate the possible score of each survey question using the PTSD Linguistic Dictionary and fill out the PTSD assessment surveys. This provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring expensive and laborious in-situ laboratory testing or surveys, but also to obtain the trust of clinicians, who expect to see traditional survey results for a PTSD assessment.", "Finally, we evaluate the accuracy of the LAXARY model's performance and the reliability and validity of the generated PTSD Linguistic Dictionary using real twitter users' posts. Our results show that, given normal weekly messages posted on twitter, LAXARY can provide very high accuracy in filling out surveys towards identifying PTSD ($\\approx 96\\%$) and its intensity ($\\approx 1.2$ mean squared error)." ], [ "Fig. FIGREF7 shows a schematic representation of our proposed model. It consists of the following logical steps: (i) develop a PTSD Detection System using twitter posts of war veterans; (ii) design real surveys from the popular symptom-based mental disease assessment surveys; (iii) define a single category and create a PTSD Linguistic Dictionary for each survey question, with multiple aspects/words for each question; (iv) calculate $\\alpha $-scores for each category and dimension based on linguistic inquiry and word count as well as the aspects/words-based dictionary; (v) calculate scaling scores ($s$-scores) for each dimension based on the $\\alpha $-scores, and $s$-scores of each category based on the $s$-scores of its dimensions; (vi) rank features according to their contribution to achieving separation among the categories associated with different $\\alpha $-scores and $s$-scores, and select feature sets that minimize the overlap among categories as associated with the target classifier (SGD); and finally (vii) estimate the quality of the selected features-based classification for filling out surveys based on the classified categories, i.e.
a PTSD assessment which is trustworthy among the psychiatry community." ], [ "Twitter activity-based mental health assessment has been of utmost importance to Natural Language Processing (NLP) researchers and social media analysts for decades. Several studies have turned to social media data to study mental health, since it provides an unbiased collection of a person's language and behavior, which has been shown to be useful in diagnosing conditions. BIBREF9 used a character n-gram language model (CLM) based s-score measure, setting up some user-centric emotional word sets. BIBREF10 used positive and negative PTSD data to train three classifiers: (i) a unigram language model (ULM); (ii) a character n-gram language model (CLM); and (iii) one based on LIWC category $\\alpha $-scores, and found that the last one gives higher accuracy than the others. BIBREF11 used two types of $s$-scores, taking the ratio of negative and positive language models. Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2, forum posts for depression BIBREF3, self narratives for PTSD (BIBREF4, BIBREF5), and chat rooms for bipolar disorder BIBREF6. Specifically in social media, differences have previously been observed between depressed and control groups (as assessed by internet-administered batteries) via LIWC: depressed users more frequently use first-person pronouns (BIBREF7) and more frequently use negative emotion words and anger words on Twitter, but show no differences in positive emotion word usage (BIBREF8). Similarly, an increase in negative emotion and first-person pronouns, and a decrease in third-person pronouns (via LIWC) is observed, as well as many manifestations of literature findings in the pattern of life of depressed users (e.g., social engagement, demographics) (BIBREF12).
Differences in language use in social media via LIWC have also been observed between PTSD and control groups (BIBREF13).", "All of the prior works used some arbitrary dictionary of human sentiment (positive/negative) word sets as category words to estimate mental health, but very few of them addressed the explainability of their solution in order to obtain the trust of clinicians. Islam et al. BIBREF14 proposed an explainable topic modeling framework to rank different mental health features using Local Interpretable Model-Agnostic Explanations and visualize them to understand the features involved in mental health status classification, but it fails to earn the trust of clinicians due to its lack of interpretability in clinical terms. In this paper, we develop the LAXARY model: first, we investigate clinically validated survey tools, which are trustworthy methods of PTSD assessment among clinicians; we then build our category sets based on the survey questions and use these as dictionary words, restricted to the first-person singular pronoun aspect, for the next-level LIWC algorithm. Finally, we develop a modified LIWC algorithm to estimate survey scores (similar to the sentiment category scores of naive LIWC) which is both explainable and trustworthy to clinicians." ], [ "There are many clinically validated PTSD assessment tools that are used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of these tools, the most popular and well-accepted one is the Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I).
An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are further scales used in the risky behavior analysis of an individual's daily activities, such as the Berlin Social Support Scales (BSSS) BIBREF16 and the Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above-mentioned survey systems to assess PTSD among war veterans and considers the rest of them irrelevant to PTSD. The details of the dryhootch-chosen survey scale are stated in Table TABREF13. Table TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if an individual's weekly DOSPERT score goes over 28, he is in a critical situation in terms of risk-taking symptoms of PTSD. Dryhootch divides the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS).", "High risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools, i.e.
DOSPERT, BSSS and VIAS, then he/she is in a high-risk situation which needs immediate mental support to avoid catastrophic effects on the individual's health or the lives of surrounding people.", "Moderate risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in a moderate-risk situation which needs close observation and peer mentoring to avoid risk progression.", "Low risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.", "No PTSD: If an individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD." ], [ "To develop an explainable model, we first need to develop a twitter-based PTSD detection algorithm. In this section, we describe the data collection and the development of our core LAXARY model." ], [ "We use automated regular expression based searching to find potential veterans with PTSD on twitter, and then refine the list manually. First, we select different keywords to search for twitter users of different categories. For example, to find self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD, for example, post trauma, post traumatic disorder, PTSD etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet post. To find veterans, we mostly visit different twitter accounts of veterans organizations such as "MA Women Veterans @WomenVeterans", "Illinois Veterans @ILVetsAffairs", "Veterans Benefits @VAVetBenefits" etc. We define the inclusion criteria as follows: a twitter user will be part of this study if he/she describes himself/herself as a veteran in the introduction and has at least 25 tweets in the last week.
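The regular-expression search for self-identified diagnoses might look like the sketch below; these patterns are illustrative, not the exact expressions used in the study:

```python
import re

# Hypothetical patterns for self-reported PTSD diagnoses (illustrative only).
DIAGNOSIS_PATTERNS = [
    re.compile(r"\b(i was|i've been|i have been) diagnosed with ptsd\b", re.I),
    re.compile(r"\bi (have|suffer from) ptsd\b", re.I),
]

def self_identifies_ptsd(tweet):
    """True if the tweet looks like a genuine self-reported PTSD diagnosis."""
    return any(pattern.search(tweet) for pattern in DIAGNOSIS_PATTERNS)
```

Matched tweets would still need the manual review the text describes, since such patterns also fire on jokes and quoted speech.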
After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to have been diagnosed with PTSD in their twitter posts. We find 685 matching tweets, which are manually reviewed to determine if they indicate a genuine statement of a PTSD diagnosis. Next, we select the username that authored each of these tweets and retrieve the last week's tweets via the Twitter API. We then filtered out users with fewer than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system). This filtering left us with 305 users as positive examples. We repeated this process for a group of randomly selected users. We randomly selected 3,000 twitter users who are veterans as per their introduction and have at least 25 tweets in the last week. After filtering (as above), in total 2,423 users remain, whose tweets are used as negative examples, yielding a full week of twitter posts from 2,728 users, of whom 305 are self-claimed PTSD sufferers. We distributed the Dryhootch-chosen surveys among 1,200 users (the 305 self-claimed PTSD sufferers and the rest randomly chosen from the previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed with PTSD by at least one of the three surveys and the remaining 118 users were diagnosed with NO PTSD. Among the clinically diagnosed PTSD sufferers, 17 were not self-identified before. However, 7 of the self-identified PTSD sufferers were assessed with no PTSD by the PTSD assessment tools. The response rates of the PTSD and NO PTSD users are 27% and 12%, respectively. In summary, we collected one week of tweets from 2,728 veterans, of whom 305 claimed to have been diagnosed with PTSD. After distributing the Dryhootch surveys, we have a dataset of 210 veteran twitter users, among them 92 users assessed with PTSD and 118 users diagnosed with no PTSD using clinically validated surveys.
The severity of the PTSD is estimated as non-existent, light, moderate or high PTSD based on how many surveys support the existence of PTSD among the participants, according to the dryhootch manual BIBREF18, BIBREF19." ], [ "We download all twitter posts of the 210 users, who are war veterans and clinically assessed as described above, which resulted in a total of 12,385 tweets. Fig FIGREF16 shows the monthly average tweets of each of the 210 veteran twitter users. We categorize these Tweets into two groups: Tweets related to work and Tweets not related to work. That is, only the Tweets that use a form of the word “work*” (e.g. work, worked, working, worker, etc.) or “job*” (e.g. job, jobs, jobless, etc.) are identified as work-related Tweets, with the remainder categorized as non-work-related Tweets. This categorization method increases the likelihood that most Tweets in the work group are indeed talking about work or a job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related Tweets, about 5.4% of all Tweets written in English (and 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of Tweets (i.e. overall Tweets, work-related Tweets, and non-work-related Tweets) on a daily basis, and create a text file for each week for each group." ], [ "We use the PTSD classification algorithm proposed by Coppersmith BIBREF11 to develop our baseline blackbox model. We utilize our positive and negative PTSD data (+92, -118) to train three classifiers: (i) a unigram language model (ULM) examining individual whole words, (ii) a character n-gram language model (CLM), and (iii) LIWC-based categorical models on top of the prior ones. The LMs have been shown to be effective for twitter classification tasks BIBREF9 and LIWC has previously been used for the analysis of mental health on twitter BIBREF10.
The language models measure the probability that a word (ULM) or a string of characters (CLM) was generated by the same underlying process as the training data. We first train one of each language model ($clm^{+}$ and $ulm^{+}$) from the tweets of PTSD users, and another model ($clm^{-}$ and $ulm^{-}$) from the tweets of No PTSD users. Each test tweet $t$ is scored by comparing the probabilities from each LM, called the $s$-score.", "A threshold of 1 for the $s$-score divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a loglinear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contains a URL, as these often pertain to events external to the user.", "We conduct a LIWC analysis of the PTSD and non-PTSD tweets to determine if there are differences in the language usage of PTSD users. We applied the LIWC battery and examined the distribution of words in their language. Each tweet was tokenized by separating on whitespace. For each user, for a subset of the LIWC categories, we measured the proportion of tweets that contained at least one word from that category. Specifically, we examined the following nine categories: first-, second- and third-person pronouns, swear, anger, positive emotion, negative emotion, death, and anxiety words. Second-person pronouns were used significantly less often by PTSD users, while third-person pronouns and words about anxiety were used significantly more often." ], [ "The heart of the LAXARY framework is the construction of the PTSD Linguistic Dictionary.
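Before turning to the dictionary, the baseline $s$-score described above can be made concrete with a toy character n-gram model; add-one smoothing here is a simplified stand-in for the CLM of BIBREF11, not its actual implementation:

```python
import math
from collections import Counter

def train_clm(texts, n=3):
    """Toy character trigram counts (a simplified stand-in for the CLM)."""
    grams, contexts = Counter(), Counter()
    for text in texts:
        padded = "^" * (n - 1) + text.lower()
        for i in range(len(padded) - n + 1):
            grams[padded[i:i + n]] += 1
            contexts[padded[i:i + n - 1]] += 1
    return grams, contexts

def log_prob(text, model, n=3, alphabet_size=100):
    """Add-one-smoothed log probability of the text under the model."""
    grams, contexts = model
    padded = "^" * (n - 1) + text.lower()
    total = 0.0
    for i in range(len(padded) - n + 1):
        gram = padded[i:i + n]
        total += math.log((grams[gram] + 1) / (contexts[gram[:-1]] + alphabet_size))
    return total

def s_score(tweet, clm_pos, clm_neg):
    """Probability ratio p+(t)/p-(t); values above 1 favour the PTSD model."""
    return math.exp(log_prob(tweet, clm_pos) - log_prob(tweet, clm_neg))
```

The threshold of 1 mentioned in the text then corresponds to asking which of the two models assigns the tweet the higher probability.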
Prior works show that linguistic-dictionary-based text analysis is highly effective in Twitter-based sentiment analysis BIBREF21, BIBREF22. Our work is the first of its kind to develop its own linguistic dictionary for explaining automatic PTSD assessment, in order to establish trustworthiness with clinicians." ], [ "We use the LIWC-developed WordStat dictionary format for our text analysis BIBREF23. The LIWC application relies on an internal default dictionary that defines which words should be counted in the target text files. To avoid confusion in the subsequent discussion, text words that are read and analyzed by WordStat are referred to as target words. Words in the WordStat dictionary file are referred to as dictionary words. Groups of dictionary words that tap a particular domain (e.g., negative emotion words) are variously referred to as subdictionaries or word categories. Fig FIGREF8 is a sample WordStat dictionary. Using this dictionary involves several steps:", "Pronoun selection: First, we define the pronouns of the target sentiment. Here we use first-person singular pronouns (i.e., I, me, mine, etc.), meaning we only count sentences or segments related to the first-person singular, i.e., to the person himself or herself.", "Category selection: We define the categories of each word set so that we can analyze the text analysis scores of categories as well as dimensions. We chose three categories based on the three different surveys: 1) the DOSPERT scale; 2) the BSSS scale; and 3) the VIAS scale.", "Dimension selection: We define the word sets (also called dimensions) for each category. We chose one dimension for each question under each category to reflect real survey system evaluation. Our chosen categories are stated in Fig FIGREF20.", "Score calculation ($\alpha$-score): $\alpha$-scores refer to the Cronbach's alphas for the internal reliability of the specific words within each category.
The binary alphas are computed on the ratio of occurrence and non-occurrence of each dictionary word, whereas the raw or uncorrected alphas are based on the percentage of use of each of the category words within texts." ], [ "After the PTSD Linguistic Dictionary has been created, we empirically evaluate its psychometric properties, such as reliability and validity, per the American standards for educational and psychological testing BIBREF24. In psychometrics, reliability is most commonly evaluated by Cronbach's alpha, which assesses internal consistency based on inter-correlations and the number of measured items. In the text analysis scenario, each word in our PTSD Linguistic Dictionary is considered an item, and reliability is calculated based on each text file's response to each word item, which forms an $N$ (number of text files) $\times J$ (number of words or stems in the dictionary) data matrix. There are two ways to quantify such responses: using percentage data (uncorrected method), or using \"present or not\" data (binary method) BIBREF23. For the uncorrected method, the data matrix comprises the percentage values of each word/stem calculated from each text file. For the binary method, the data matrix quantifies whether or not a word was used in a text file, where \"1\" represents yes and \"0\" represents no. Once the data matrix is created, it is used to calculate Cronbach's alpha based on the inter-correlation matrix among the word percentages. We assess reliability based on our selected 210 users' Tweets, which generated a 23,562-response matrix after running the PTSD Linguistic Dictionary for each user. The response matrix yields a reliability of .89 based on the uncorrected method and .96 based on the binary method, which confirms the high reliability of the PTSD-survey-based categories created from our PTSD Dictionary.
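The two response-quantification schemes and the alpha computation can be sketched in plain Python. This is a toy illustration with made-up counts, not the authors' implementation; `cronbach_alpha` uses the standard item-variance formula:

```python
from statistics import variance

def cronbach_alpha(matrix):
    """Cronbach's alpha for an N (text files) x J (dictionary words) matrix."""
    j = len(matrix[0])
    item_var = sum(variance(col) for col in zip(*matrix))   # per-word variances
    total_var = variance([sum(row) for row in matrix])      # variance of file totals
    return (j / (j - 1)) * (1 - item_var / total_var)

# Toy counts of dictionary-word uses in four text files (N=4, J=3):
counts = [[2, 0, 1], [3, 1, 0], [0, 2, 2], [1, 1, 1]]
words_per_file = [10, 12, 8, 9]

# Uncorrected method: percentage of use of each word/stem per file.
uncorrected = [[c / n for c in row] for row, n in zip(counts, words_per_file)]
# Binary method: 1 if the word is present in the file, 0 otherwise.
binary = [[1 if c > 0 else 0 for c in row] for row in counts]
```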
After assessing the reliability of the PTSD Linguistic Dictionary, we focus on the two most common forms of construct validity: convergent validity and discriminant validity BIBREF25. Convergent validity provides evidence that two measures designed to assess the same construct are indeed related; discriminant validity involves evidence that two measures designed to assess different constructs are not too strongly related. In theory, we expect that the PTSD Linguistic Dictionary should be positively correlated with other negative PTSD constructs to show convergent validity, and not strongly correlated with positive PTSD constructs to show discriminant validity. To test these two types of validity, we use the same 210 users' tweets used for the reliability assessment. The results revealed that the PTSD Linguistic Dictionary is indeed positively correlated with negative construct dictionaries, including the overall negative PTSD dictionary (r = 3.664, p $<$ .001). Table TABREF25 shows all 16 categorical dictionaries. These results provide strong support for the measurement validity of our newly created PTSD Linguistic Dictionary.", "" ], [ "We use the same method as LIWC to extract $\alpha$-scores for each dimension and category, except that we use our generated PTSD Linguistic Dictionary for the task BIBREF23. Thus we have 16 $\alpha$-scores in total. Meanwhile, we propose a new type of feature in this regard, which we call the scaling score ($s$-score). The $s$-score is calculated from the $\alpha$-scores. The purpose of the $s$-score is to assign exact scores to each dimension and category so that we can apply the same method used in the real weekly survey system. The idea is to divide each category by its corresponding scale factor, using the 8, 3 and 5 scaling factors of the DOSPERT, BSSS and VIAS scales, respectively, as used in the real survey system.
Then we set the $s$-score from the scaling factors applied to the $\alpha$-scores of the corresponding question dimension. The algorithm is stated in Figure FIGREF23. Following Fig FIGREF23, we calculate the $s$-score for each dimension. Then we add up all the $s$-scores of the dimensions to calculate the cumulative $s$-score of a particular category, which is displayed in Fig FIGREF22. Finally, we have 32 features in total, of which 16 are $\alpha$-scores and 16 are $s$-scores, one pair per category (i.e. per question). We add the $\alpha$- and $s$-scores together and scale them according to their corresponding survey score scales using min-max standardization. The final output is then a 16-valued matrix representing the score for each question from the three Dryhootch surveys. We use the output to fill out each survey and estimate the prevalence of PTSD and its intensity based on each tool's respective evaluation metric." ], [ "To validate the performance of the LAXARY framework, we first divide the 210 users' Twitter posts into training and test datasets. We then develop the PTSD Linguistic Dictionary from the Twitter posts in the training dataset and apply the LAXARY framework to the test dataset." ], [ "To provide initial results, we take 50% of the users' last week's (the week they reported having PTSD) data to develop the PTSD Linguistic Dictionary and apply the LAXARY framework to fill out surveys on the remaining 50% of the dataset. The training-test segmentation followed a 50% distribution of PTSD and No PTSD users from the original dataset. Our final survey-based classification results showed an accuracy of 96% in detecting PTSD and a mean squared error of 1.2 in estimating its intensity, given four intensity levels: No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with scores of 0, 1, 2 and 3, respectively. Table TABREF29 shows the classification details of our experiment, which demonstrate the high accuracy of our classification.
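The min-max standardization mentioned above is the usual linear rescaling onto a survey's score range; a minimal sketch with a hypothetical helper name, not the authors' code:

```python
def min_max_scale(scores, lo, hi):
    """Rescale raw scores linearly onto the [lo, hi] range of a survey scale.

    Assumes the scores are not all identical (otherwise the range is zero).
    """
    mn, mx = min(scores), max(scores)
    return [lo + (s - mn) * (hi - lo) / (mx - mn) for s in scores]

# Example: map combined alpha+s scores onto a 0-8 survey scale.
scaled = min_max_scale([2.0, 4.0, 6.0], 0, 8)
```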
For comparison, we also implemented the method proposed by Coppersmith et al. BIBREF11, which achieved an 86% overall accuracy in detecting PTSD users under the same training-test dataset distribution. Fig FIGREF28 illustrates the comparison between LAXARY and the Coppersmith et al. method. Here we can see the superior performance of our proposed method, as well as the importance of $s$-score estimation. We also illustrate the importance of the $\alpha$-score and $s$-score in Fig FIGREF30, which shows that as we vary the number of training samples (%), the LAXARY models outperform the Coppersmith et al. model under every condition. In terms of intensity, the Coppersmith et al. method provides no estimate at all, whereas LAXARY provides highly accurate intensity estimates for PTSD sufferers (as shown in Fig FIGREF31), which can be explained simply by presenting the surveys filled out by the LAXARY model. Table TABREF29 shows the detailed accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows how classification accuracy changes with training sample size for each survey, indicating that the DOSPERT scale outperforms the other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week the PTSD diagnosis was made), there are no significant patterns of PTSD detection." ], [ "LAXARY is a highly ambitious model that aims to fill out clinically validated survey tools using only Twitter posts. Unlike previous Twitter-based mental health assessment tools, LAXARY provides a clinically interpretable model that offers better classification accuracy and intensity of PTSD assessment, and can more easily win the trust of clinicians. The central challenge of LAXARY is to find Twitter users through the Twitter search engine and manually label them for analysis.
While developing the PTSD Linguistic Dictionary, although we followed exactly the development approach of the LIWC WordStat dictionary and tested its reliability and validity, our dictionary has not yet been validated by domain experts, as PTSD detection is a more sensitive issue than stress/depression detection. Moreover, given the extreme challenges of searching for veterans on Twitter using our selection and inclusion criteria, it was extremely difficult to manually find evidence for the self-claimed PTSD sufferers. Although we have shown very promising initial findings on translating a blackbox model into clinically trusted tools, 210 users' data alone is not enough to build a trustworthy model. Moreover, more clinical validation must be done in the future with real clinicians to firmly validate the PTSD assessment outcomes provided by the LAXARY model. In the future, we aim to collect more data and conduct not only nationwide but also international data collection to turn our innovation into a real tool. Apart from that, as we achieved promising results in detecting PTSD and its intensity using only Twitter data, we aim to develop Linguistic Dictionaries for other mental health issues as well. Moreover, we will apply our proposed method to other types of mental illness, such as depression, bipolar disorder, suicidal ideation and seasonal affective disorder (SAD). As we know, the accuracy of any particular social media analysis depends mostly on the dataset. We aim to collect more data, engaging more researchers, to establish a set of mental-illness-specific Linguistic Databases and evaluation techniques to solidify the generalizability of our proposed method." ], [ "To give trauma patients better support, it is important to detect Post-Traumatic Stress Disorder (PTSD) sufferers in time, before the condition spirals out of control with potentially catastrophic impacts on society, the people around them, or the sufferers themselves.
Although psychiatrists have devised several clinical diagnosis tools (i.e., surveys) that assess the symptoms, signs and impairment associated with PTSD, diagnosis most often happens at a severe stage of the illness, by which point irreversible damage to the sufferer's mental health may already have occurred. On the other hand, due to a lack of explainability, existing Twitter-based methods are not trusted by clinicians. In this paper, we proposed LAXARY, a novel method of filling out PTSD assessment surveys using weekly Twitter posts. As clinical surveys are a trusted and understandable instrument, we believe this method will be able to gain the trust of clinicians for early detection of PTSD. Moreover, our proposed LAXARY model, the first of its kind, can be used to develop a Linguistic Dictionary for any type of mental disorder, providing a generalized and trustworthy mental health assessment framework." ] ] } { "question": [ "Do they evaluate only on English datasets?", "Do the authors mention any possible confounds in this study?", "How is the intensity of the PTSD established?", "How is LIWC incorporated into this system?", "How many twitter users are surveyed using the clinically validated survey?", "Which clinically validated survey tools are used?"
], "question_id": [ "53712f0ce764633dbb034e550bb6604f15c0cacd", "0bffc3d82d02910d4816c16b390125e5df55fd01", "bdd8368debcb1bdad14c454aaf96695ac5186b09", "3334f50fe1796ce0df9dd58540e9c08be5856c23", "7081b6909cb87b58a7b85017a2278275be58bf60", "1870f871a5bcea418c44f81f352897a2f53d0971" ], "nlp_background": [ "five", "five", "five", "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "search_query": [ "twitter", "twitter", "twitter", "twitter", "twitter", "twitter" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "4e3a79dc56c6f39d1bec7bac257c57f279431967" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "fcf589c48d32bdf0ef4eab547f9ae22412f5805a" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively, the estimated intensity is established as mean squared error.", "evidence": [ "To provide an initial results, we take 50% of users' last week's (the week they responded of having PTSD) data to develop PTSD Linguistic dictionary and apply LAXARY framework to fill up surveys on rest of 50% dataset. 
The distribution of this training-test dataset segmentation followed a 50% distribution of PTSD and No PTSD from the original dataset. Our final survey based classification results showed an accuracy of 96% in detecting PTSD and mean squared error of 1.2 in estimating its intensity given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively. Table TABREF29 shows the classification details of our experiment which provide the very good accuracy of our classification. To compare the outperformance of our method, we also implemented Coppersmith et. al. proposed method and achieved an 86% overall accuracy of detecting PTSD users BIBREF11 following the same training-test dataset distribution. Fig FIGREF28 illustrates the comparisons between LAXARY and Coppersmith et. al. proposed method. Here we can see, the outperformance of our proposed method as well as the importance of$s-score$estimation. We also illustrates the importance of$\\alpha -score$and$S-score$in Fig FIGREF30. Fig FIGREF30 illustrates that if we change the number of training samples (%), LAXARY models outperforms Coppersmith et. al. proposed model under any condition. In terms of intensity, Coppersmith et. al. totally fails to provide any idea however LAXARY provides extremely accurate measures of intensity estimation for PTSD sufferers (as shown in Fig FIGREF31) which can be explained simply providing LAXARY model filled out survey details. Table TABREF29 shows the details of accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows the classification accuracy changes over the training sample sizes for each survey which shows that DOSPERT scale outperform other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week diagnosis of PTSD was taken), there are no significant patterns of PTSD detection." 
], "highlighted_evidence": [ " Our final survey based classification results showed an accuracy of 96% in detecting PTSD and mean squared error of 1.2 in estimating its intensity given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively. " ] }, { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "defined into four categories from high risk, moderate risk, to low risk", "evidence": [ "There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of individual's daily activities such as, The Berlin Social Support Scales (BSSS) BIBREF16 and Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess the PTSD among war veterans and consider rest of them as irrelevant to PTSD. The details of dryhootch chosen survey scale are stated in Table TABREF13. Table!TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. 
For example, if one individual's weekly DOSPERT score goes over 28, he is in critical situation in terms of risk taking symptoms of PTSD. Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )", "High risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.", "Moderate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.", "Low risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.", "No PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD." ], "highlighted_evidence": [ "Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )\n\nHigh risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. 
DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.\n\nModerate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.\n\nLow risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.\n\nNo PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD." ] } ], "annotation_id": [ "5fb7cea5f88219c0c6b7de07c638124a52ef5701", "b62b56730f7536bfcb03b0e784d74674badcc806" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " For each user, we calculate the proportion of tweets scored positively by each LIWC category." ], "yes_no": null, "free_form_answer": "", "evidence": [ "A threshold of 1 for$s-scoredivides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a loglinear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contained a URL, as these often pertain to events external to the user." 
], "highlighted_evidence": [ "For each user, we calculate the proportion of tweets scored positively by each LIWC category. " ] }, { "unanswerable": false, "extractive_spans": [ "to calculate the possible scores of each survey question using PTSD Linguistic Dictionary " ], "yes_no": null, "free_form_answer": "", "evidence": [ "LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment." ], "highlighted_evidence": [ "LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment." ] } ], "annotation_id": [ "348b89ed7cf9b893cd45d99de412e0f424f97f2a", "9a5f2c8b73ad98f1e28c384471b29b92bcf38de5" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "210" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We download 210 users' all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12,385 tweets. Fig FIGREF16 shows each of the 210 veteran twitter users' monthly average tweets. 
We categorize these Tweets into two groups: Tweets related to work and Tweets not related to work. That is, only the Tweets that use a form of the word “work*” (e.g. work,worked, working, worker, etc.) or “job*” (e.g. job, jobs, jobless, etc.) are identified as work-related Tweets, with the remaining categorized as non-work-related Tweets. This categorization method increases the likelihood that most Tweets in the work group are indeed talking about work or job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related Tweets, about 5.4% of all Tweets written in English (and 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of Tweets (i.e. overall Tweets, work-related Tweets, and non work-related Tweets) on a daily basis, and create a text file for each week for each group." ], "highlighted_evidence": [ "We download 210 users' all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12,385 tweets. " ] } ], "annotation_id": [ "10d346425fb3693cdf36e224fb28ca37d57b71a0" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "DOSPERT, BSSS and VIAS" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use an automated regular expression based searching to find potential veterans with PTSD in twitter, and then refine the list manually. First, we select different keywords to search twitter users of different categories. For example, to search self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD for example, post trauma, post traumatic disorder, PTSD etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet posts. 
To search veterans, we mostly visit to different twitter accounts of veterans organizations such as \"MA Women Veterans @WomenVeterans\", \"Illinois Veterans @ILVetsAffairs\", \"Veterans Benefits @VAVetBenefits\" etc. We define an inclusion criteria as follows: one twitter user will be part of this study if he/she describes himself/herself as a veteran in the introduction and have at least 25 tweets in last week. After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to be diagnosed with PTSD in their twitter posts. We find 685 matching tweets which are manually reviewed to determine if they indicate a genuine statement of a diagnosis for PTSD. Next, we select the username that authored each of these tweets and retrieve last week's tweets via the Twitter API. We then filtered out users with less than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system.) This filtering left us with 305 users as positive examples. We repeated this process for a group of randomly selected users. We randomly selected 3,000 twitter users who are veterans as per their introduction and have at least 25 tweets in last one week. After filtering (as above) in total 2,423 users remain, whose tweets are used as negative examples developing a 2,728 user's entire weeks' twitter posts where 305 users are self-claimed PTSD sufferers. We distributed Dryhootch chosen surveys among 1,200 users (305 users are self claimed PTSD sufferers and rest of them are randomly chosen from previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed as PTSD by any of the three surveys and rest of the 118 users are diagnosed with NO PTSD. Among the clinically diagnosed PTSD sufferers, 17 of them were not self-identified before. However, 7 of the self-identified PTSD sufferers are assessed with no PTSD by PTSD assessment tools. 
The response rates of PTSD and NO PTSD users are 27% and 12%. In summary, we have collected one week of tweets from 2,728 veterans where 305 users claimed to have diagnosed with PTSD. After distributing Dryhootch surveys, we have a dataset of 210 veteran twitter users among them 92 users are assessed with PTSD and 118 users are diagnosed with no PTSD using clinically validated surveys. The severity of the PTSD are estimated as Non-existent, light, moderate and high PTSD based on how many surveys support the existence of PTSD among the participants according to dryhootch manual BIBREF18, BIBREF19.", "There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of individual's daily activities such as, The Berlin Social Support Scales (BSSS) BIBREF16 and Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess the PTSD among war veterans and consider rest of them as irrelevant to PTSD. The details of dryhootch chosen survey scale are stated in Table TABREF13. Table!TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. 
The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he is in critical situation in terms of risk taking symptoms of PTSD. Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )" ], "highlighted_evidence": [ "We distributed Dryhootch chosen surveys among 1,200 users (305 users are self claimed PTSD sufferers and rest of them are randomly chosen from previous 2,423 users) and received 210 successful responses. ", "Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )" ] } ], "annotation_id": [ "6185d05f806ff3e054ec5bb7fd773679b7fbb6d9" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] } 2003.12218 Comprehensive Named Entity Recognition on CORD-19 with Distant or Weak Supervision We created this CORD-19-NER dataset with comprehensive named entity recognition (NER) on the COVID-19 Open Research Dataset Challenge (CORD-19) corpus (2020- 03-13). This CORD-19-NER dataset covers 74 fine-grained named entity types. It is automatically generated by combining the annotation results from four sources: (1) pre-trained NER model on 18 general entity types from Spacy, (2) pre-trained NER model on 18 biomedical entity types from SciSpacy, (3) knowledge base (KB)-guided NER model on 127 biomedical entity types with our distantly-supervised NER method, and (4) seed-guided NER model on 8 new entity types (specifically related to the COVID-19 studies) with our weakly-supervised NER method. We hope this dataset can help the text mining community build downstream applications. We also hope this dataset can bring insights for the COVID- 19 studies, both on the biomedical side and on the social side. 
{ "section_name": [ "Introduction", "CORD-19-NER Dataset ::: Corpus", "CORD-19-NER Dataset ::: NER Methods", "Results ::: NER Annotation Results", "Results ::: Top-Frequent Entity Summarization", "Conclusion", "Acknowledgment" ], "paragraphs": [ [ "Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first identified in 2019 in Wuhan, Central China, and has since spread globally, resulting in the 2019–2020 coronavirus pandemic. On March 16th, 2020, researchers and leaders from the Allen Institute for AI, Chan Zuckerberg Initiative (CZI), Georgetown University’s Center for Security and Emerging Technology (CSET), Microsoft, and the National Library of Medicine (NLM) at the National Institutes of Health released the COVID-19 Open Research Dataset (CORD-19) of scholarly literature about COVID-19, SARS-CoV-2, and the coronavirus group.", "Named entity recognition (NER) is a fundamental step in text mining system development to facilitate COVID-19 studies. There is a critical need for NER methods that can quickly adapt to all the new COVID-19-related types without much human effort for training data annotation. We created this CORD-19-NER dataset with comprehensive named entity annotation on the CORD-19 corpus (2020-03-13). This dataset covers 75 fine-grained entity types. CORD-19-NER is automatically generated by combining the annotation results from four sources. In the following sections, we introduce the details of the CORD-19-NER dataset construction. We also show some NER annotation results in this dataset." ], [ "The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations.", "Corpus Tokenization.
The raw corpus is a combination of the “title\", “abstract\" and “full-text\" from the CORD-19 corpus. We first conduct automatic phrase mining on the raw corpus using AutoPhrase BIBREF0. Then we perform a second round of tokenization with Spacy on the phrase-replaced corpus. We have observed that keeping the AutoPhrase results will significantly improve the distantly- and weakly-supervised NER performance.", "Key Items. The tokenized corpus includes the following items:", "doc_id: the line number (0-29499) in “all_sources_metadata_2020-03-13.csv\" in the CORD-19 corpus (2020-03-13).", "sents: [sent_id, sent_tokens], tokenized sentences and words as described above.", "source: CZI (1236 records), PMC (27337), bioRxiv (566) and medRxiv (361).", "doi: populated for all BioRxiv/MedRxiv paper records and most of the other records (26357 non null).", "pmcid: populated for all PMC paper records (27337 non null).", "pubmed_id: populated for some of the records.", "Other keys: publish_time, authors and journal.", "The tokenized corpus (CORD-19-corpus.json) with the file schema and detailed descriptions can be found in our CORD-19-NER dataset." ], [ "CORD-19-NER annotation is a combination of four sources with different NER methods:", "Pre-trained NER on 18 general entity types from Spacy using the model “en_core_web_sm\".", "Pre-trained NER on 18 biomedical entity types from SciSpacy using the model “en_ner_bionlp13cg_md\".", "Knowledge base (KB)-guided NER on 127 biomedical entity types with our distantly-supervised NER methods BIBREF1, BIBREF2. We do not require any human-annotated training data for the NER model training. Instead, we rely on UMLS as the input KB for distant supervision.", "Seed-guided NER on 9 new entity types (specifically related to the COVID-19 studies) with our weakly-supervised NER method. We only require several (10-20) human-input seed entities for each new type. 
Then we expand the seed entity sets with CatE BIBREF3 and apply our distant NER method for the new entity type recognition.", "The 9 new entity types with examples of their input seeds are as follows:", "Coronavirus: COVID-19, SARS, MERS, etc.", "Viral Protein: Hemagglutinin, GP120, etc.", "Livestock: cattle, sheep, pig, etc.", "Wildlife: bats, wild animals, wild birds, etc.", "Evolution: genetic drift, natural selection, mutation rate, etc.", "Physical Science: atomic charge, Amber force fields, Van der Waals interactions, etc.", "Substrate: blood, sputum, urine, etc.", "Material: copper, stainless steel, plastic, etc.", "Immune Response: adaptive immune response, cell mediated immunity, innate immunity, etc.", "We merged all the entity types from the four sources and reorganized them into one entity type hierarchy. Specifically, we align all the types from SciSpacy to UMLS. We also merge some fine-grained UMLS entity types to their more coarse-grained types based on the corpus count. Then we get a final entity type hierarchy with 75 fine-grained entity types used in our annotations. The entity type hierarchy (CORD-19-types.xlsx) can be found in our CORD-19-NER dataset.", "Then we conduct named entity annotation with the four NER methods on the 75 fine-grained entity types. After we get the NER annotation results with the four different methods, we merge the results into one file. The conflicts are resolved by giving priority to different entity types annotated by different methods according to their annotation quality. The final entity annotation results (CORD-19-ner.json) with the file schema and detailed descriptions can be found in our CORD-19-NER dataset." ], [ "In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- or weakly-supervised methods achieve high quality in recognizing the new entity types, requiring only a few seed examples as input. 
For example, we recognized “SARS-CoV-2\" as the “CORONAVIRUS\" type, “bat\" and “pangolins\" as the “WILDLIFE\" type and “Van der Waals forces\" as the “PHYSICAL_SCIENCE\" type. These NER annotation results help downstream text mining tasks in discovering the origin and the physical nature of the virus. Our NER methods are domain-independent and can be applied to corpora in different domains. In addition, we show another example of NER annotation on the New York Times with our system in Figure FIGREF29.", "In Figure FIGREF30, we show the comparison of our annotation results with existing NER/BioNER systems. In Figure FIGREF30, we can see that only our method can identify “SARS-CoV-2\" as a coronavirus. In Figure FIGREF30, we can see that our method can identify many more entities such as “phylogenetic\" as an evolution term and “bat\" as a wildlife term. In Figure FIGREF30, we can also see that our method can identify many more entities such as “racism\" as a social behavior. In summary, our distantly- and weakly-supervised NER methods are reliable for high-quality entity recognition without requiring human effort for training data annotation." ], [ "In Table TABREF34, we show some examples of the most frequent entities in the annotated corpus. Specifically, we show the entity types including both our new types and some UMLS types that have not been manually annotated before. We find our annotated entities very informative for the COVID-19 studies. For example, the most frequent entities for the type “SIGN_OR_SYMPTOM behavior\" include “cough\" and “respiratory symptoms\", which are among the most common symptoms of COVID-19. The most frequent entities for the type “INDIVIDUAL_BEHAVIOR\" include “hand hygiene\", “disclosures\" and “absenteeism\", which indicates that people focus more on hand cleaning for the COVID-19 issue. 
Also, the most frequent entities for the type “MACHINE_ACTIVITY\" include “machine learning\", “data processing\" and “automation\", which indicates that people focus more on the automated methods that can process massive data for the COVID-19 studies. This type also includes “telecommunication\" among the top results, which is quite reasonable under the current COVID-19 situation. More examples can be found in our dataset." ], [ "In the future, we will further improve the CORD-19-NER dataset quality. We will also build text mining systems based on the CORD-19-NER dataset with richer functionalities. We hope this dataset can help the text mining community build downstream applications. We also hope this dataset can bring insights for the COVID-19 studies, both on the biomedical side and on the social side." ], [ "Research was sponsored in part by US DARPA KAIROS Program No. FA8750-19-2-1004 and SocialSim Program No. W911NF-17-C-0099, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, and DTRA HDTRA11810026. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies." ] ] } { "question": [ "Did they experiment with the dataset?", "What is the size of this dataset?", "Do they list all the named entity types present?" 
], "question_id": [ "ce6201435cc1196ad72b742db92abd709e0f9e8d", "928828544e38fe26c53d81d1b9c70a9fb1cc3feb", "4f243056e63a74d1349488983dc1238228ca76a7" ], "nlp_background": [ "", "", "" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- or weakly-supervised methods achieve high quality in recognizing the new entity types, requiring only a few seed examples as input. For example, we recognized “SARS-CoV-2\" as the “CORONAVIRUS\" type, “bat\" and “pangolins\" as the “WILDLIFE\" type and “Van der Waals forces\" as the “PHYSICAL_SCIENCE\" type. These NER annotation results help downstream text mining tasks in discovering the origin and the physical nature of the virus. Our NER methods are domain-independent and can be applied to corpora in different domains. In addition, we show another example of NER annotation on the New York Times with our system in Figure FIGREF29.", "In Figure FIGREF30, we show the comparison of our annotation results with existing NER/BioNER systems. In Figure FIGREF30, we can see that only our method can identify “SARS-CoV-2\" as a coronavirus. In Figure FIGREF30, we can see that our method can identify many more entities such as “phylogenetic\" as an evolution term and “bat\" as a wildlife term. In Figure FIGREF30, we can also see that our method can identify many more entities such as “racism\" as a social behavior. In summary, our distantly- and weakly-supervised NER methods are reliable for high-quality entity recognition without requiring human effort for training data annotation." 
], "highlighted_evidence": [ "In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- or weakly-supervised methods achieve high quality in recognizing the new entity types, requiring only a few seed examples as input. For example, we recognized “SARS-CoV-2\" as the “CORONAVIRUS\" type, “bat\" and “pangolins\" as the “WILDLIFE\" type and “Van der Waals forces\" as the “PHYSICAL_SCIENCE\" type. These NER annotation results help downstream text mining tasks in discovering the origin and the physical nature of the virus. Our NER methods are domain-independent and can be applied to corpora in different domains. In addition, we show another example of NER annotation on the New York Times with our system in Figure FIGREF29.\n\nIn Figure FIGREF30, we show the comparison of our annotation results with existing NER/BioNER systems. In Figure FIGREF30, we can see that only our method can identify “SARS-CoV-2\" as a coronavirus. In Figure FIGREF30, we can see that our method can identify many more entities such as “phylogenetic\" as an evolution term and “bat\" as a wildlife term. In Figure FIGREF30, we can also see that our method can identify many more entities such as “racism\" as a social behavior. In summary, our distantly- and weakly-supervised NER methods are reliable for high-quality entity recognition without requiring human effort for training data annotation." ] } ], "annotation_id": [ "2d5e1221e7cd30341b51ddb988b8659b48b7ac2b" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "29,500 documents" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Named entity recognition (NER) is a fundamental step in text mining system development to facilitate the COVID-19 studies. There is a critical need for NER methods that can quickly adapt to all the COVID-19 related new types without much human effort for training data annotation. 
We created this CORD-19-NER dataset with comprehensive named entity annotation on the CORD-19 corpus (2020-03-13). This dataset covers 75 fine-grained named entity types. CORD-19-NER is automatically generated by combining the annotation results from four sources. In the following sections, we introduce the details of CORD-19-NER dataset construction. We also show some NER annotation results in this dataset.", "The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations." ], "highlighted_evidence": [ "We created this CORD-19-NER dataset with comprehensive named entity annotation on the CORD-19 corpus (2020-03-13). ", "The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). " ] }, { "unanswerable": false, "extractive_spans": [ "29,500 documents in the CORD-19 corpus (2020-03-13)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations." ], "highlighted_evidence": [ "The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. 
Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations.\n\n" ] } ], "annotation_id": [ "1466a1bd3601c1b1cdedab1edb1bca2334809e3d", "cd982553050caaa6fd8dabefe8b9697b05f5cf94" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "FLOAT SELECTED: Table 2: Examples of the most frequent entities annotated in CORD-NER." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Examples of the most frequent entities annotated in CORD-NER." ] } ], "annotation_id": [ "bd64f676b7b1d47ad86c5c897acfe759c2259269" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] } 1904.09678 UniSent: Universal Adaptable Sentiment Lexica for 1000+ Languages In this paper, we introduce UniSent, a universal sentiment lexicon for 1000 languages created using an English sentiment lexicon and a massively parallel corpus in the Bible domain. To the best of our knowledge, UniSent is the largest sentiment resource to date in terms of number of covered languages, including many low-resource languages. To create UniSent, we propose Adapted Sentiment Pivot, a novel method that combines annotation projection, vocabulary expansion, and unsupervised domain adaptation. We evaluate the quality of UniSent for Macedonian, Czech, German, Spanish, and French and show that its quality is comparable to manually or semi-manually created sentiment resources. With the publication of this paper, we release UniSent lexica as well as Adapted Sentiment Pivot related code. { "section_name": [ "Introduction", "Method", "Experimental Setup", "Results", "Conclusion" ], "paragraphs": [ [ "Sentiment classification is an important task which requires either word-level or document-level sentiment annotations. 
Such resources are available for at most 136 languages BIBREF0, preventing accurate sentiment classification in a low-resource setup. Recent research efforts on cross-lingual transfer learning make it possible to train models in high-resource languages and transfer this information to other, low-resource languages using minimal bilingual supervision BIBREF1, BIBREF2, BIBREF3. Besides that, little effort has been spent on the creation of sentiment lexica for low-resource languages (e.g., BIBREF0, BIBREF4, BIBREF5). We create and release UniSent, the first massively cross-lingual sentiment lexicon in more than 1000 languages. An extensive evaluation across several languages shows that the quality of UniSent is close to manually created resources. Our method is inspired by BIBREF6 with a novel combination of vocabulary expansion and domain adaptation using embedding spaces. Similar to our work, BIBREF7 also use massively parallel corpora to project POS tags and dependency relations across languages. However, their approach is based on assignment of the most probable label according to the alignment model from the source to the target language, and includes neither vocabulary expansion nor domain adaptation, nor does it use embedding graphs." ], [ "Our method, Adapted Sentiment Pivot, requires a sentiment lexicon in one language (e.g. English) as well as a massively parallel corpus. The following steps are performed on this input." ], [ "Our goal is to evaluate the quality of UniSent against several manually created sentiment lexica in different domains to ensure its quality for low-resource languages. We do this in several steps.", "As the gold standard sentiment lexica, we chose manually created lexica in Czech BIBREF11, German BIBREF12, French BIBREF13, Macedonian BIBREF14, and Spanish BIBREF15. These lexica contain general-domain words (as opposed to Twitter or Bible). 
As the gold standard for the Twitter domain, we use an emoticon dataset and perform emoticon sentiment prediction BIBREF16, BIBREF17.", "We use the (manually created) English sentiment lexicon (WKWSCI) in BIBREF18 as a resource to be projected across languages. For the projection step (Section SECREF1) we use the massively parallel Bible corpus in BIBREF8. We then propagate the projected sentiment polarities to all words in the Wikipedia corpus. We chose Wikipedia here because its domain is closest to the manually annotated sentiment lexica we use to evaluate UniSent. In the adaptation step, we compute the shift between the vocabularies in the Bible and Wikipedia corpora. To show that our adaptation method also works well on domains like Twitter, we propose a second evaluation in which we use Adapted Sentiment Pivot to predict the sentiment of emoticons in Twitter.", "To create our test sets, we first split UniSent and our gold standard lexica as illustrated in Figure FIGREF11. We then form our training and test sets as follows:", "(i) UniSent-Lexicon: we use words in UniSent for the sentiment learning in the target domain; for this purpose, we use words INLINEFORM0.", "(ii) Baseline-Lexicon: we use words in the gold standard lexicon for the sentiment learning in the target domain; for this purpose, we use words INLINEFORM0.", "(iii) Evaluation-Lexicon: we randomly exclude a set of words from the baseline-lexicon INLINEFORM0. When selecting the sample size, we make sure that INLINEFORM1 and INLINEFORM2 would contain a comparable number of words.", "" ], [ "In Table TABREF13 we compare the quality of UniSent with the Baseline-Lexicon as well as with the gold standard lexicon for general-domain data. The results show that (i) UniSent clearly outperforms the baseline for all languages, (ii) the quality of UniSent is close to that of manually annotated data, and (iii) the domain adaptation method brings small improvements for morphologically poor languages. 
The modest gains could be because our drift weighting method (Section SECREF3) mainly models a sense shift between words, which is not always equivalent to a polarity shift.", "In Table TABREF14 we compare the quality of UniSent with the gold standard emoticon lexicon in the Twitter domain. The results show that (i) UniSent clearly outperforms the baseline and (ii) our domain adaptation technique brings small improvements for French and Spanish." ], [ "Using our novel Adapted Sentiment Pivot method, we created UniSent, a sentiment lexicon covering over 1000 (including many low-resource) languages in several domains. The only necessary resources to create UniSent are a sentiment lexicon in any language and a massively parallel corpus that can be small and domain-specific. Our evaluation showed that the quality of UniSent is close to manually annotated resources.", " " ] ] } { "question": [ "how is quality measured?", "how many languages exactly is the sentiment lexica for?", "what sentiment sources do they compare with?" ], "question_id": [ "8f87215f4709ee1eb9ddcc7900c6c054c970160b", "b04098f7507efdffcbabd600391ef32318da28b3", "8fc14714eb83817341ada708b9a0b6b4c6ab5023" ], "nlp_background": [ "", "", "" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Accuracy and the macro-F1 (averaged F1 over positive and negative classes) are used as measures of quality.", "evidence": [ "FLOAT SELECTED: Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonian, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). 
The baseline constantly predicts the majority label. The last two columns indicate the performance of UniSent after drift weighting." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonian, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline constantly predicts the majority label. The last two columns indicate the performance of UniSent after drift weighting." ] }, { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "97009bed24107de806232d7cf069f51053d7ba5e", "e38ed05ec140abd97006a8fa7af9a7b4930247df" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "d1204f71bd3c78a11b133016f54de78e8eaecf6e" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 . These lexica contain general domain words (as opposed to Twitter or Bible). As gold standard for twitter domain we use emoticon dataset and perform emoticon sentiment prediction BIBREF16 , BIBREF17 ." ], "highlighted_evidence": [ "As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 ." 
] } ], "annotation_id": [ "17db53c0c6f13fe1d43eee276a9554677f007eef" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] } 2003.06651 Word Sense Disambiguation for 158 Languages using Word Embeddings Only Disambiguation of word senses in context is easy for humans, but is a major challenge for automatic approaches. Sophisticated supervised and knowledge-based models were developed to solve this task. However, (i) the inherent Zipfian distribution of supervised training instances for a given word and/or (ii) the quality of linguistic knowledge representations motivate the development of completely unsupervised and knowledge-free approaches to word sense disambiguation (WSD). They are particularly useful for under-resourced languages which do not have any resources for building either supervised and/or knowledge-based models. In this paper, we present a method that takes as input a standard pre-trained word embedding model and induces a fully-fledged word sense inventory, which can be used for disambiguation in context. We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pre-trained fastText word embeddings by Grave et al. (2018), enabling WSD in these languages. Models and system are available online. 
{ "section_name": [ "", " ::: ", " ::: ::: ", "Introduction", "Related Work", "Algorithm for Word Sense Induction", "Algorithm for Word Sense Induction ::: SenseGram: A Baseline Graph-based Word Sense Induction Algorithm", "Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Induction of Sense Inventories", "Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Labelling of Induced Senses", "Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Word Sense Disambiguation", "System Design", "System Design ::: Construction of Sense Inventories", "System Design ::: Word Sense Disambiguation System", "Evaluation", "Evaluation ::: Lexical Similarity and Relatedness ::: Experimental Setup", "Evaluation ::: Lexical Similarity and Relatedness ::: Discussion of Results", "Evaluation ::: Word Sense Disambiguation", "Evaluation ::: Word Sense Disambiguation ::: Experimental Setup", "Evaluation ::: Word Sense Disambiguation ::: Discussion of Results", "Evaluation ::: Analysis", "Conclusions and Future Work", "Acknowledgements" ], "paragraphs": [ [ "1.1em" ], [ "1.1.1em" ], [ "1.1.1.1em", "ru=russian", "", "$^1$Skolkovo Institute of Science and Technology, Moscow, Russia", "v.logacheva@skoltech.ru", "$^2$Ural Federal University, Yekaterinburg, Russia", "$^3$Universität Hamburg, Hamburg, Germany", "$^4$Universität Mannheim, Mannheim, Germany", "$^5$University of Oslo, Oslo, Norway", "$^6$Higher School of Economics, Moscow, Russia", "Disambiguation of word senses in context is easy for humans, but is a major challenge for automatic approaches. Sophisticated supervised and knowledge-based models were developed to solve this task. 
However, (i) the inherent Zipfian distribution of supervised training instances for a given word and/or (ii) the quality of linguistic knowledge representations motivate the development of completely unsupervised and knowledge-free approaches to word sense disambiguation (WSD). They are particularly useful for under-resourced languages which do not have any resources for building either supervised and/or knowledge-based models. In this paper, we present a method that takes as input a standard pre-trained word embedding model and induces a fully-fledged word sense inventory, which can be used for disambiguation in context. We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pre-trained fastText word embeddings by Grave:18, enabling WSD in these languages. Models and system are available online.", "word sense induction, word sense disambiguation, word embeddings, sense embeddings, graph clustering" ], [ "There are many polysemous words in virtually any language. If not treated as such, they can hamper the performance of all semantic NLP tasks BIBREF0. Therefore, the task of resolving the polysemy and choosing the most appropriate meaning of a word in context has been an important NLP task for a long time. It is usually referred to as Word Sense Disambiguation (WSD) and aims at assigning meaning to a word in context.", "The majority of approaches to WSD are based on the use of knowledge bases, taxonomies, and other external manually built resources BIBREF1, BIBREF2. However, different senses of a polysemous word occur in very diverse contexts and can potentially be discriminated with their help. The fact that semantically related words occur in similar contexts, and diverse words do not share common contexts, is known as the distributional hypothesis and underlies the technique of constructing word embeddings from unlabelled texts. 
The same intuition can be used to discriminate between different senses of individual words. There exist methods of training word embeddings that can detect polysemous words and assign them different vectors depending on their contexts BIBREF3, BIBREF4. Unfortunately, many widespread word embedding models, such as GloVe BIBREF5, word2vec BIBREF6, fastText BIBREF7, do not handle polysemous words. Words in these models are represented with single vectors, which were constructed from diverse sets of contexts corresponding to different senses. In such cases, their disambiguation needs knowledge-rich approaches.", "We tackle this problem by suggesting a method of post-hoc unsupervised WSD. It does not require any external knowledge and can separate different senses of a polysemous word using only the information encoded in pre-trained word embeddings. We construct a semantic similarity graph for words and partition it into densely connected subgraphs. This partition allows for separating different senses of polysemous words. Thus, the only language resource we need is a large unlabelled text corpus used to train embeddings. This makes our method applicable to under-resourced languages. Moreover, while other methods of unsupervised WSD need to train embeddings from scratch, we perform retrofitting of sense vectors based on existing word embeddings.", "We create a massively multilingual application for on-the-fly word sense disambiguation. When receiving a text, the system identifies its language and performs disambiguation of all the polysemous words in it based on pre-extracted word sense inventories. The system works for 158 languages, for which pre-trained fastText embeddings are available BIBREF8. The created inventories are based on these embeddings. To the best of our knowledge, our system is the only WSD system for the majority of the presented languages. 
Although it does not match the state of the art for resource-rich languages, it is fully unsupervised and can be used for virtually any language.", "The contributions of our work are the following:", "We release word sense inventories associated with fastText embeddings for 158 languages.", "We release a system that allows on-the-fly word sense disambiguation for 158 languages.", "We present egvi (Ego-Graph Vector Induction), a new algorithm of unsupervised word sense induction, which creates sense inventories based on pre-trained word vectors." ], [ "There are two main scenarios for WSD: the supervised approach that leverages training corpora explicitly labelled for word sense, and the knowledge-based approach that derives sense representations from lexical resources, such as WordNet BIBREF9. In the supervised case, WSD can be treated as a classification problem. Knowledge-based approaches construct sense embeddings, i.e. embeddings that separate various word senses.", "SupWSD BIBREF10 is a state-of-the-art system for supervised WSD. It makes use of linear classifiers and a number of features such as POS tags, surrounding words, local collocations, word embeddings, and syntactic relations. The GlossBERT model BIBREF11, which is another implementation of supervised WSD, achieves a significant improvement by leveraging gloss information. This model benefits from the sentence-pair classification approach, introduced by Devlin:19 in their BERT contextualized embedding model. The input to the model consists of a context (a sentence which contains an ambiguous word) and a gloss (sense definition) from WordNet. The context-gloss pair is concatenated through a special token ([SEP]) and classified as positive or negative.", "On the other hand, sense embeddings are an alternative to traditional word vector models such as word2vec, fastText or GloVe, which represent monosemous words well but fail for ambiguous words. 
Sense embeddings represent individual senses of polysemous words as separate vectors. They can be linked to an explicit inventory BIBREF12 or induce a sense inventory from unlabelled data BIBREF13. LSTMEmbed BIBREF13 aims at learning sense embeddings linked to BabelNet BIBREF14, at the same time handling word ordering, and using pre-trained embeddings as an objective. Although it was tested only on English, the approach can be easily adapted to other languages present in BabelNet. However, manually labelled datasets as well as knowledge bases exist only for a small number of well-resourced languages. Thus, to disambiguate polysemous words in other languages one has to resort to fully unsupervised techniques.", "The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require mapping each cluster to a predefined sense. Instead, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense of a word. WSI approaches fall into three main groups: context clustering, word ego-network clustering and synonym (or substitute) clustering.", "Context clustering approaches consist in creating vectors which characterise words' contexts and clustering these vectors. Here, the definition of context may vary from window-based context to latent topic-like context. Afterwards, the resulting clusters are either used as senses directly BIBREF15, or employed further to learn sense embeddings via the Chinese Restaurant Process algorithm BIBREF16, AdaGram, a Bayesian extension of the Skip-Gram model BIBREF17, AutoSense, an extension of the LDA topic model BIBREF18, and other techniques.", "Word ego-network clustering is applied to semantic graphs. The nodes of a semantic graph are words, and edges between them denote semantic relatedness, which is usually evaluated with cosine similarity of the corresponding embeddings BIBREF19 or by PMI-like measures BIBREF20. 
Word senses are induced via graph clustering algorithms, such as Chinese Whispers BIBREF21 or MaxMax BIBREF22. The technique suggested in our work belongs to this class of methods and is an extension of the method presented by Pelevina:16.", "Synonyms and substitute clustering approaches create vectors which represent synonyms or substitutes of polysemous words. Such vectors are created using synonymy dictionaries BIBREF23 or context-dependent substitutes obtained from a language model BIBREF24. Analogously to the previously described techniques, word senses are induced by clustering these vectors." ], [ "The majority of word vector models do not discriminate between multiple senses of individual words. However, a polysemous word can be identified via manual analysis of its nearest neighbours: they reflect different senses of the word. Table TABREF7 shows the manually sense-labelled most similar terms to the word Ruby according to the pre-trained fastText model BIBREF8. As suggested early on by Widdows:02, the distributional properties of a word can be used to construct a graph of words that are semantically related to it, and if a word is polysemous, such a graph can easily be partitioned into a number of densely connected subgraphs corresponding to different senses of this word. Our algorithm is based on the same principle." ], [ "SenseGram is the method proposed by Pelevina:16 that separates nearest neighbours to induce word senses and constructs sense embeddings for each sense. It starts by constructing an ego-graph (a semantic graph centred at a particular word) of the word and its nearest neighbours. The edges between the words denote their semantic relatedness, e.g. two nodes are joined with an edge if the cosine similarity of the corresponding embeddings is higher than a pre-defined threshold. 
The resulting graph can be clustered into subgraphs which correspond to senses of the word.", "The sense vectors are then constructed by averaging the embeddings of the words in each resulting cluster. In order to use these sense vectors for word sense disambiguation in text, the authors compute the probabilities of the sense vectors of a word given its context or the similarity of the sense vectors to the context." ], [ "One of the downsides of the algorithm described above is the noise in the generated graph, namely unrelated words and wrong connections, which hamper the separation of the graph. Another weak point is the imbalance in the nearest neighbour list, where a large part of it is attributed to the most frequent sense, not sufficiently representing the other senses. This can lead to the construction of incorrect sense vectors.", "We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to perform filtering of graph nodes. Analogously to the work of Pelevina:16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure:", "Extract a list $\mathcal {N}$ = {$w_{1}$, $w_{2}$, ..., $w_{N}$} of $N$ nearest neighbours for the target (ego) word vector $w$.", "Compute a list $\Delta $ = {$\delta _{1}$, $\delta _{2}$, ..., $\delta _{N}$}, where $\delta _{i}~=~w-w_{i}$ for each $w_{i}$ in $\mathcal {N}$. 
The vectors in$\\delta $contain the components of sense of$w$which are not related to the corresponding nearest neighbours from$\\mathcal {N}$.", "Compute a list$\\overline{\\mathcal {N}}$= {$\\overline{w_{1}}$,$\\overline{w_{2}}$, ...,$\\overline{w_{N}}$}, such that$\\overline{w_{i}}$is in the top nearest neighbours of$\\delta _{i}$in the embedding space. In other words,$\\overline{w_{i}}$is a word which is the most similar to the target (ego) word$w$and least similar to its neighbour$w_{i}$. We refer to$\\overline{w_{i}}$as an anti-pair of$w_{i}$. The set of$N$nearest neighbours and their anti-pairs form a set of anti-edges i.e. pairs of most dissimilar nodes – those which should not be connected:$\\overline{E} = \\lbrace (w_{1},\\overline{w_{1}}), (w_{2},\\overline{w_{2}}), ..., (w_{N},\\overline{w_{N}})\\rbrace $.", "To clarify this, consider the target (ego) word$w = \\textit {python}$, its top similar term$w_1 = \\textit {Java}$and the resulting anti-pair$\\overline{w_i} = \\textit {snake}$which is the top related term of$\\delta _1 = w - w_1$. Together they form an anti-edge$(w_i,\\overline{w_i})=(\\textit {Java}, \\textit {snake})$composed of a pair of semantically dissimilar terms.", "Construct$V$, the set of vertices of our semantic graph$G=(V,E)$from the list of anti-edges$\\overline{E}$, with the following recurrent procedure:$V = V \\cup \\lbrace w_{i}, \\overline{w_{i}}: w_{i} \\in \\mathcal {N}, \\overline{w_{i}} \\in \\mathcal {N}\\rbrace $, i.e. we add a word from the list of nearest neighbours and its anti-pair only if both of them are nearest neighbours of the original word$w$. We do not add$w$'s nearest neighbours if their anti-pairs do not belong to$\\mathcal {N}$. Thus, we add only words which can help discriminating between different senses of$w$.", "Construct the set of edges$E$as follows. 
For each$w_{i}~\\in ~\\mathcal {N}$we extract a set of its$K$nearest neighbours$\\mathcal {N}^{\\prime }_{i} = \\lbrace u_{1}, u_{2}, ..., u_{K}\\rbrace $and define$E = \\lbrace (w_{i}, u_{j}): w_{i}~\\in ~V, u_j~\\in ~V, u_{j}~\\in ~\\mathcal {N}^{\\prime }_{i}, u_{j}~\\ne ~\\overline{w_{i}}\\rbrace $. In other words, we remove edges between a word$w_{i}$and its nearest neighbour$u_j$if$u_j$is also its anti-pair. According to our hypothesis,$w_{i}$and$\\overline{w_{i}}$belong to different senses of$w$, so they should not be connected (i.e. we never add anti-edges into$E$). Therefore, we consider any connection between them as noise and remove it.", "Note that$N$(the number of nearest neighbours for the target word$w$) and$K$(the number of nearest neighbours of$w_{ci}$) do not have to match. The difference between these parameters is the following.$N$defines how many words will be considered for the construction of ego-graph. On the other hand,$K$defines the degree of relatedness between words in the ego-graph — if$K = 50$, then we will connect vertices$w$and$u$with an edge only if$u$is in the list of 50 nearest neighbours of$w$. Increasing$K$increases the graph connectivity and leads to lower granularity of senses.", "According to our hypothesis, nearest neighbours of$w$are grouped into clusters in the vector space, and each of the clusters corresponds to a sense of$w$. The described vertices selection procedure allows picking the most representative members of these clusters which are better at discriminating between the clusters. In addition to that, it helps dealing with the cases when one of the clusters is over-represented in the nearest neighbour list. In this case, many elements of such a cluster are not added to$V$because their anti-pairs fall outside the nearest neighbour list. This also improves the quality of clustering.", "After the graph construction, the clustering is performed using the Chinese Whispers algorithm BIBREF21. 
This is a bottom-up clustering procedure that does not require pre-defining the number of clusters, so it can correctly process polysemous words with varying numbers of senses as well as unambiguous words.", "Figure FIGREF17 shows an example of the resulting pruned graph for the word Ruby for $N = 50$ nearest neighbours in terms of the fastText cosine similarity. In contrast to the baseline method by BIBREF19, where all 50 terms are clustered, in the method presented in this section we sparsify the graph by removing 13 nodes which were not in the set of the “anti-edges”, i.e. pairs of most dissimilar terms out of these 50 neighbours. Examples of anti-edges, i.e. pairs of most dissimilar terms, for this graph include: (Haskell, Sapphire), (Garnet, Rails), (Opal, Rubyist), (Hazel, RubyOnRails), and (Coffeescript, Opal)." ], [ "We label each word cluster representing a sense to make the clusters and the WSD results interpretable by humans. Prior systems used hypernyms to label the clusters BIBREF25, BIBREF26, e.g. “animal” in “python (animal)”. However, neither hypernyms nor rules for their automatic extraction are available for all 158 languages. Therefore, we use a simpler method to select a keyword which would help to interpret each cluster. For each graph node $v \in V$ we count the number of anti-edges it belongs to: $count(v) = | \lbrace (w_i,\overline{w_i}) : (w_i,\overline{w_i}) \in \overline{E} \wedge (v = w_i \vee v = \overline{w_i}) \rbrace |$. A graph clustering yields a partition of $V$ into $n$ clusters: $V~=~\lbrace V_1, V_2, ..., V_n\rbrace $. For each cluster $V_i$ we define a keyword $w^{key}_i$ as the word with the largest number of anti-edges $count(\cdot )$ among the words in this cluster." ], [ "We use the keywords defined above to obtain vector representations of senses. 
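The keyword selection rule just described (the cluster member participating in the most anti-edges), together with a keyword-based sense choice for a word in context, can be sketched as follows (`cluster_keywords` and `disambiguate` are hypothetical helper names, and `emb` is assumed to map words to numpy vectors):

```python
import numpy as np
from collections import Counter

def cluster_keywords(clusters, anti_edges):
    """Keyword of a cluster = the member with the largest anti-edge count."""
    count = Counter()
    for wi, wj in anti_edges:
        count[wi] += 1
        count[wj] += 1
    return [max(cluster, key=lambda v: count[v]) for cluster in clusters]

def disambiguate(context_words, keywords, emb):
    """Pick the sense whose keyword embedding is closest (cosine) to the
    averaged context embedding."""
    ctx = np.mean([emb[w] for w in context_words if w in emb], axis=0)
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(keywords, key=lambda k: cos(emb[k], ctx))
```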
In particular, we simply use the word embedding of the keyword $w^{key}_i$ as the sense representation $\mathbf {s}_i$ of the target word $w$ to avoid explicit computation of sense embeddings as in BIBREF19. Given a sentence $\lbrace w_1, w_2, ..., w_{j}, w, w_{j+1}, ..., w_n\rbrace $ represented as a matrix of word vectors, we define the context of the target word $w$ as $\textbf {c}_w = \dfrac{\sum _{j=1}^{n} w_j}{n}$. Then, we define the most appropriate sense $\hat{s}$ as the sense with the highest cosine similarity to the embedding of the word's context: $\hat{s} = \arg \max _i \cos (\mathbf {s}_i, \textbf {c}_w)$." ], [ "We release a system for on-the-fly WSD for 158 languages. Given textual input, it identifies polysemous words and retrieves the senses that are the most appropriate in the context." ], [ "To build word sense inventories (sense vectors) for 158 languages, we utilised GPU-accelerated routines for the search of similar vectors implemented in the Faiss library BIBREF27. The search of nearest neighbours takes substantial time; therefore, acceleration with GPUs helps to significantly reduce the word sense construction time. To further speed up the process, we keep all intermediate results in memory, which results in substantial RAM consumption of up to 200 GB.", "The construction of word senses for all of the 158 languages requires substantial computational resources and imposes high demands on the hardware. For the calculations, we use 10–20 nodes of the Zhores cluster BIBREF28 in parallel, equipped with Nvidia Tesla V100 graphics cards. For each of the languages, we construct inventories based on 50, 100, and 200 neighbours for the 100,000 most frequent words. The vocabulary was limited in order to make the computation time feasible. The construction of the inventories for one language takes up to 10 hours, with $6.5$ hours on average. Building the inventories for all languages took more than 1,000 hours of GPU-accelerated computations. We release the constructed sense inventories for all the available languages. 
They contain all the necessary information for using them in the proposed WSD system or in other downstream tasks." ], [ "The first text pre-processing step is language identification, for which we use the fastText language identification models by Bojanowski:17. Then the input is tokenised. For languages which use Latin, Cyrillic, Hebrew, or Greek scripts, we employ the Europarl tokeniser. For Chinese, we use the Stanford Word Segmenter BIBREF29. For Japanese, we use Mecab BIBREF30. We tokenise Vietnamese with UETsegmenter BIBREF31. All other languages are processed with the ICU tokeniser, as implemented in the PyICU project. After the tokenisation, the system analyses all the input words with the pre-extracted sense inventories and defines the most appropriate sense for polysemous words.", "Figure FIGREF19 shows the interface of the system. It has a textual input form. The automatically identified language of the text is shown above. A click on any of the words displays a prompt (shown in black) with the most appropriate sense of the word in the specified context and the confidence score. In the given example, the word Jaguar is correctly identified as a car brand. This system is based on the system by Ustalov:18, extending it with a back-end for multiple languages, language detection, and sense browsing capabilities." ], [ "We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on WSD task." ], [ "We use the SemR-11 datasets BIBREF32, which contain word pairs with manually assigned similarity scores from 0 (words are not related) to 10 (words are fully interchangeable) for 12 languages: English (en), Arabic (ar), German (de), Spanish (es), Farsi (fa), French (fr), Italian (it), Dutch (nl), Portuguese (pt), Russian (ru), Swedish (sv), Chinese (zh). 
The task is to assign relatedness scores to these pairs so that the ranking of the pairs by this score is close to the ranking defined by the oracle score. The performance is measured with the Pearson correlation of the rankings. Since one word can have several different senses in our setup, we follow Remus:18 and define the relatedness score for a pair of words as the maximum cosine similarity between any of their sense vectors.", "We extract the sense inventories from fastText embedding vectors. We set $N=K$ for all our experiments, i.e. the number of vertices in the graph and the maximum number of vertices' nearest neighbours match. We conduct experiments with $N=K$ set to 50, 100, and 200. For each cluster $V_i$ we create a sense vector $s_i$ by averaging the vectors that belong to this cluster. We rely on the methodology of BIBREF33, shifting the generated sense vector in the direction of the original word vector: $s_i~=~\lambda ~w + (1-\lambda )~\dfrac{1}{n}~\sum _{u~\in ~V_i} \cos (w, u)\cdot u$, where $\lambda \in [0, 1]$, $w$ is the embedding of the original word, $\cos (w, u)$ is the cosine similarity between $w$ and $u$, and $n=|V_i|$. By introducing the linear combination of $w$ and $u~\in ~V_i$, we enforce the similarity of the sense vectors to the original word, which is important for this task. In addition to that, we weight each $u$ by its similarity to the original word, so that more similar neighbours contribute more to the sense vector. The shifting parameter $\lambda $ is set to $0.5$, following Remus:18.", "A fastText model is able to generate a vector for each word even if it is not represented in the vocabulary, due to the use of subword information. However, our system cannot assemble sense vectors for out-of-vocabulary words; for such words it returns their original fastText vector. 
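The λ-shifted sense vector and the max-similarity relatedness score defined above can be sketched as follows (a minimal numpy illustration; the function names are assumptions, with λ = 0.5 as in the text):

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def sense_vector(w_vec, cluster_vecs, lam=0.5):
    """s_i = lam * w + (1 - lam) * (1/n) * sum over the cluster of cos(w, u) * u."""
    weighted = sum(cos(w_vec, u) * u for u in cluster_vecs) / len(cluster_vecs)
    return lam * w_vec + (1 - lam) * weighted

def relatedness(senses_a, senses_b):
    """Relatedness of two words = max cosine over all their sense-vector pairs."""
    return max(cos(a, b) for a in senses_a for b in senses_b)
```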
Still, the coverage of the benchmark datasets by our vocabulary is at least 85% and approaches 100% for some languages, so we do not have to resort to this back-off strategy very often.", "We use the original fastText vectors as a baseline. In this case, we compute the relatedness score of two words as the cosine similarity of their vectors." ], [ "We compute the relatedness scores for all benchmark datasets using our sense vectors and compare them to the cosine similarity scores of the original fastText vectors. The results vary for different languages. Figure FIGREF28 shows the change in Pearson correlation score when switching from the baseline fastText embeddings to our sense vectors. The new vectors significantly improve the relatedness detection for German, Farsi, Russian, and Chinese, whereas for Italian, Dutch, and Swedish the score falls slightly behind the baseline. For other languages, the performance of the sense vectors is on par with regular fastText." ], [ "The purpose of our sense vectors is the disambiguation of polysemous words. Therefore, we test the inventories constructed with egvi on Task 13 of SemEval-2013 — Word Sense Induction BIBREF34. The task is to identify the different senses of a target word in context in a fully unsupervised manner." ], [ "The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives and specifies 20 to 100 contexts per word, with the total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide them into clusters by sense of the word. 
The contexts are manually labelled with WordNet senses of the target words, the gold standard clustering is generated from this labelling.", "The task allows two setups: graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context, and non-graded WSI where a model determines a single sense for a word in context. In our experiments we performed non-graded WSI. We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context, as described in Section SECREF9.", "The performance of WSI models is measured with three metrics that require a mapping of sense inventories (Jaccard Index, Kendall's $\tau $, and WNDCG) and two cluster comparison metrics (Fuzzy NMI and Fuzzy B-Cubed)." ], [ "We compare our model with the models that participated in the task, the baseline ego-graph clustering model by Pelevina:16, and AdaGram BIBREF17, a method that learns sense embeddings based on a Bayesian extension of the Skip-gram model. Besides that, we provide the scores of the simple baselines originally used in the task: assigning one sense to all words, assigning the most frequent sense to all words, and considering each context as expressing a different sense. The evaluation of our model was performed using the open-source context-eval tool.", "Table TABREF31 shows the performance of these models on the SemEval dataset. Due to space constraints, we only report the scores of the best-performing SemEval participants; please refer to jurgens-klapaftis-2013-semeval for the full results. The performance of the AdaGram and SenseGram models is reported according to Pelevina:16.", "The table shows that the performance of egvi is similar to state-of-the-art word sense disambiguation and word sense induction models. In particular, we can see that it outperforms SenseGram on the majority of metrics. 
We should note that this comparison is not fully rigorous, because SenseGram induces sense inventories from word2vec vectors as opposed to the fastText vectors used in our work." ], [ "In order to see how the separation of word contexts that we perform corresponds to actual senses of polysemous words, we visualise the ego-graphs produced by our method. Figure FIGREF17 shows the nearest neighbours clustering for the word Ruby, which divides the graph into five senses: Ruby-related programming tools, e.g. RubyOnRails (orange cluster), female names, e.g. Josie (magenta cluster), gems, e.g. Sapphire (yellow cluster), and programming languages in general, e.g. Haskell (red cluster). Besides, as is typical for fastText embeddings featuring sub-string similarity, one can observe a cluster of different spellings of the word Ruby in green.", "Analogously, the word python (see Figure FIGREF35) is divided into the senses of animals, e.g. crocodile (yellow cluster), programming languages, e.g. perl5 (magenta cluster), and conferences, e.g. pycon (red cluster).", "In addition, we show a qualitative analysis of the senses of mouse and apple. Table TABREF38 shows the nearest neighbours of the original words separated into clusters (labels for the clusters were assigned manually). These inventories demonstrate a clear separation of different senses, although it can be too fine-grained. For example, the first and the second cluster for mouse both refer to the computer mouse, but the first one addresses the different types of computer mice, and the second one is used in the context of mouse actions. Similarly, we see that iphone and macbook are separated into two clusters. Interestingly, fastText handles typos, code-switching, and emojis by correctly associating all non-standard variants with the word they refer to, and our method is able to cluster them appropriately. Both inventories were produced with $K=200$, which ensures stronger connectivity of the graph. However, we see that this setting still produces too many clusters. 
We computed the average number of clusters produced by our model with $K=200$ for words from the word relatedness datasets and compared these numbers with the number of senses in WordNet for English and RuWordNet BIBREF35 for Russian (see Table TABREF37). We can see that the number of senses extracted by our method is consistently higher than the real number of senses.", "We also compute the average number of senses per word for all the languages and different values of $K$ (see Figure FIGREF36). The average across languages does not change much as we increase $K$. However, for larger $K$ the average exceeds the median value, indicating that more languages have a lower number of senses per word. At the same time, while at smaller $K$ the maximum average number of senses per word does not exceed 6, larger values of $K$ produce outliers, e.g. English with $12.5$ senses.", "Notably, there are no languages with an average number of senses less than 2, while the numbers for the English and Russian WordNets are considerably lower. This confirms that our method systematically over-generates senses. The presence of outliers shows that this effect cannot be eliminated by further increasing $K$, because the $i$-th nearest neighbour of a word for $i>200$ can be only remotely related to this word, even if the word is rare. Thus, our sense clustering algorithm needs a method of merging spurious senses.
As one application of these multilingual sense inventories, we present a multilingual word sense disambiguation system that performs unsupervised and knowledge-free WSD for 158 languages without the use of any dictionary or sense-labelled corpus.", "The evaluation of the quality of the produced sense inventories is performed on multilingual word similarity benchmarks, showing that our sense vectors improve the scores compared to non-disambiguated word embeddings. Therefore, our system in its present state can improve WSD and downstream tasks for languages where knowledge bases, taxonomies, and annotated corpora are not available and supervised WSD models cannot be trained.", "A promising direction for future work is combining distributional information from the induced sense inventories with lexical knowledge bases to improve WSD performance. Besides, we encourage the use of the produced word sense inventories in other downstream tasks." ], [ "We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) foundation under the “JOIN-T 2” and “ACQuA” projects. Ekaterina Artemova was supported within the framework of the HSE University Basic Research Program and the Russian Academic Excellence Project “5-100”." ] ] } { "question": [ "Is the method described in this work a clustering-based method?", "How are the different senses annotated/labeled? ", "Was any extrinsic evaluation carried out?" 
], "question_id": [ "d94ac550dfdb9e4bbe04392156065c072b9d75e1", "eeb6e0caa4cf5fdd887e1930e22c816b99306473", "3c0eaa2e24c1442d988814318de5f25729696ef5" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "f7c76ad7ff9c8b54e8c397850358fa59258c6672", "f7c76ad7ff9c8b54e8c397850358fa59258c6672", "f7c76ad7ff9c8b54e8c397850358fa59258c6672" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require to map each cluster to a predefined sense. Instead of that, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense of a word. WSI approaches fall into three main groups: context clustering, word ego-network clustering and synonyms (or substitute) clustering.", "We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to perform filtering of graph nodes. 
Analogously to the work of Pelevina:16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure:" ], "highlighted_evidence": [ "The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require to map each cluster to a predefined sense. Instead of that, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense of a word.", "We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”." ] }, { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to perform filtering of graph nodes. 
Analogously to the work of Pelevina:16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure:" ], "highlighted_evidence": [ "We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space." ] } ], "annotation_id": [ "1e94724114314e98f1b554b9e902e5b72f23a5f1", "77e36cc311b73a573d3fd8b9f3283f9483c9d2b4" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "The contexts are manually labelled with WordNet senses of the target words" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives and specifies 20 to 100 contexts per word, with the total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide them into clusters by sense of the word. The contexts are manually labelled with WordNet senses of the target words, the gold standard clustering is generated from this labelling.", "The task allows two setups: graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context, and non-graded WSI where a model determines a single sense for a word in context. In our experiments we performed non-graded WSI. We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context, as described in Section SECREF9." 
], "highlighted_evidence": [ "The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives and specifies 20 to 100 contexts per word, with the total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide them into clusters by sense of the word. The contexts are manually labelled with WordNet senses of the target words, the gold standard clustering is generated from this labelling.\n\nThe task allows two setups: graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context, and non-graded WSI where a model determines a single sense for a word in context. In our experiments we performed non-graded WSI. We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context, as described in Section SECREF9." ] } ], "annotation_id": [ "f54fb34f7c718c0916d43f15c8e0d20a590d0bad" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on WSD task." ], "highlighted_evidence": [ "We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on WSD task." 
] } ], "annotation_id": [ "1876981b02466f860fe36cc2c964b52887b55a4c" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] } 1910.04269 Spoken Language Identification using ConvNets Language Identification (LI) is an important first step in several speech processing systems. With a growing number of voice-based assistants, speech LI has emerged as a widely researched field. To approach the problem of identifying languages, we can either adopt an implicit approach where only the speech for a language is present or an explicit one where text is available with its corresponding transcript. This paper focuses on an implicit approach due to the absence of transcriptive data. This paper benchmarks existing models and proposes a new attention based model for language identification which uses log-Mel spectrogram images as input. We also present the effectiveness of raw waveforms as features to neural network models for LI tasks. For training and evaluation of models, we classified six languages (English, French, German, Spanish, Russian and Italian) with an accuracy of 95.4% and four languages (English, French, German, Spanish) with an accuracy of 96.3% obtained from the VoxForge dataset. This approach can further be scaled to incorporate more languages. 
{ "section_name": [ "Introduction", "Related Work", "Proposed Method ::: Motivations", "Proposed Method ::: Description of Features", "Proposed Method ::: Model Description", "Proposed Method ::: Model Details: 1D ConvNet", "Proposed Method ::: Model Details: 1D ConvNet ::: Hyperparameter Optimization:", "Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU", "Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: ", "Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: Hyperparameter Optimization:", "Proposed Method ::: Model details: 2D-ConvNet", "Proposed Method ::: Dataset", "Results and Discussion", "Results and Discussion ::: Misclassification", "Results and Discussion ::: Future Scope", "Conclusion" ], "paragraphs": [ [ "Language Identification (LI) is a problem which involves classifying the language being spoken by a speaker. LI systems can be used in call centers to route international calls to an operator who is fluent in that identified language BIBREF0. In speech-based assistants, LI acts as the first step which chooses the corresponding grammar from a list of available languages for its further semantic analysis BIBREF1. It can also be used in multi-lingual voice-controlled information retrieval systems, for example, Apple Siri and Amazon Alexa.", "Over the years, studies have utilized many prosodic and acoustic features to construct machine learning models for LI systems BIBREF2. Every language is composed of phonemes, which are distinct units of sound in that language, such as b of black and g of green. Several prosodic and acoustic features are based on phonemes, which become the underlying features on which the performance of the statistical model depends BIBREF3, BIBREF4. If two languages have many overlapping phonemes, then identifying them becomes a challenging task for a classifier. 
For example, the word cat in English, kat in Dutch and katze in German have different consonants, but when used in speech they would all sound quite similar.", "Due to such drawbacks, several studies have switched over to using Deep Neural Networks (DNNs) to harness their novel auto-extraction techniques BIBREF1, BIBREF5. This work follows an implicit approach for identifying six languages with overlapping phonemes on the VoxForge BIBREF6 dataset and achieves 95.4% overall accuracy.", "In previous studies BIBREF1, BIBREF7, BIBREF5, authors use the log-Mel spectrum of raw audio as input to their models. One of our contributions is to enhance the performance of this approach by utilising recent techniques like Mixup augmentation of inputs and exploring the effectiveness of the Attention mechanism in enhancing the performance of neural networks. As the log-Mel spectrum needs to be computed for each raw audio input and the processing time for generating it increases linearly with the length of the audio, this acts as a bottleneck for these models. Hence, we propose the use of raw audio waveforms as inputs to a deep neural network, which boosts performance by avoiding the additional overhead of computing the log-Mel spectrum for each audio file. Our 1D-ConvNet architecture auto-extracts and classifies features from this raw audio input.", "The structure of the work is as follows. In Section 2 we discuss previous related studies in this field. The model architecture for both the raw waveforms and log-Mel spectrogram images is discussed in Section 3, along with a discussion on hyperparameter space exploration. In Section 4 we present the experimental results. Finally, in Section 5 we discuss the conclusions drawn from the experiment and future work." ], [ "Extraction of language-dependent features like prosody and phonemes was a popular approach to classify spoken languages BIBREF8, BIBREF9, BIBREF10. 
Following their success in speaker verification systems, i-vectors have also been used as features in various classification networks. These approaches required significant domain knowledge BIBREF11, BIBREF9. Nowadays, most attempts at spoken language identification rely on neural networks for meaningful feature extraction and classification BIBREF12, BIBREF13.", "Revay et al. BIBREF5 used the ResNet50 BIBREF14 architecture for classifying languages by generating the log-Mel spectra of each raw audio. The model uses a cyclic learning rate, where the learning rate increases and then decreases linearly. The maximum learning rate for a cycle is set by finding the optimal learning rate using the fastai BIBREF15 library. The model classified six languages – English, French, Spanish, Russian, Italian and German – achieving an accuracy of 89.0%.", "Gazeau et al. BIBREF16 showed how Neural Networks, Support Vector Machines and Hidden Markov Models (HMM) can be used to identify French, English, Spanish and German. The dataset was prepared using voice samples from the Youtube News BIBREF17 and VoxForge BIBREF6 datasets. Hidden Markov models, which convert speech into a sequence of vectors, were used to capture temporal features in speech. HMMs trained on the VoxForge BIBREF6 dataset performed best in comparison to the other proposed models on the same VoxForge dataset. They reported an accuracy of 70.0%.", "Bartz et al. BIBREF1 proposed two different hybrid Convolutional Recurrent Neural Networks for language identification. They proposed a new architecture for extracting spatial features from log-Mel spectra of raw audio using CNNs and then using RNNs for capturing temporal features to identify the language. This model achieved an accuracy of 91.0% on the Youtube News Dataset BIBREF17. In their second architecture, they used the Inception-v3 BIBREF18 architecture to extract spatial features, which were then used as input for bi-directional LSTMs to predict the language accurately. 
This model achieved an accuracy of 96.0% on four languages: English, German, French and Spanish. They also trained their CNN model (obtained after removing the RNN from the CRNN model) and the Inception-v3 on their dataset. However, these models did not achieve better results, reporting 90% and 95% accuracies, respectively.", "Kumar et al. BIBREF0 used Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction coefficients (PLP), Bark Frequency Cepstral Coefficients (BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) as features for language identification. BFCC and RPLP are hybrid features derived using MFCC and PLP. They used two different models based on Vector Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model (GMM) for classification. These classification models were trained with different features. The authors were able to show that these models worked better with hybrid features (BFCC and RPLP) as compared to conventional features (MFCC and PLP). GMM combined with RPLP features gave the most promising results and achieved an accuracy of 88.8% on ten languages. They designed their own dataset comprising ten languages: Dutch, English, French, German, Italian, Russian, Spanish, Hindi, Telegu, and Bengali.", "Montavon BIBREF7 generated Mel spectrograms as features for a time-delay neural network (TDNN). This network had two-dimensional convolutional layers for feature extraction. An elaborate analysis of how deep architectures outperform their shallow counterparts is presented in this research. The difficulties in classifying perceptually similar languages like German and English were also put forward in this work. It is mentioned that the proposed approach is less robust to new speakers present in the test dataset. 
This method was able to achieve an accuracy of 91.2% on a dataset comprising three languages – English, French and German.", "In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with the accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)." ], [ "Several state-of-the-art results on various audio classification tasks have been obtained by using log-Mel spectrograms of raw audio as features BIBREF19. Convolutional Neural Networks have demonstrated an excellent performance gain in classification of these features BIBREF20, BIBREF21 against other machine learning techniques. It has been shown that using attention layers with ConvNets further enhanced their performance BIBREF22. This motivated us to develop a CNN-based architecture with attention since this approach hasn’t been applied to the task of language identification before.", "Recently, using raw audio waveforms as features to neural networks has become a popular approach in audio classification BIBREF23, BIBREF22. Raw waveforms have several artifacts which are not effectively captured by various conventional feature extraction techniques like Mel Frequency Cepstral Coefficients (MFCC), Constant Q Transform (CQT), Fast Fourier Transform (FFT), etc.", "Audio files are a sequence of spoken words, hence they have temporal features too. A CNN is better at capturing spatial features, and RNNs are better at capturing temporal features, as demonstrated by Bartz et al. BIBREF1 using audio files. Therefore, we combined both of these to make a CRNN model.", "We propose three types of models to tackle the problem with different approaches, discussed as follows." 
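The fixed-length input preparation described in the features section (audio resampled to 8 kHz, clips fixed at 10 seconds, short files repeat-padded and long files truncated) can be sketched with the standard library alone; the function name and the plain-list sample representation are illustrative assumptions, not the paper's code:

```python
SAMPLE_RATE = 8000          # Hz, the sampling rate used in the paper
CLIP_SECONDS = 10           # fixed audio length used for training
TARGET_LEN = SAMPLE_RATE * CLIP_SECONDS  # 80,000 samples

def fix_length(samples):
    """Repeat-pad short clips and truncate long ones to TARGET_LEN samples."""
    if not samples:
        raise ValueError("empty audio")
    out = list(samples)
    while len(out) < TARGET_LEN:   # repeat and concatenate short audio
        out.extend(samples)
    return out[:TARGET_LEN]        # truncate anything longer
```

Every clip therefore reaches the network with the same length, matching the (batch size, 1, 80000) input shape described for the 1D ConvNet.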
], [ "As an average human's voice is around 300 Hz, and according to the Nyquist-Shannon sampling theorem all the useful frequencies (0-300 Hz) are preserved when sampling at 8 kHz, we sampled raw audio files from all six languages at 8 kHz.", "The average length of audio files in this dataset was about 10.4 seconds and the standard deviation was 2.3 seconds. For our experiments, the audio length was set to 10 seconds. If the audio files were shorter than 10 seconds, then the data was repeated and concatenated. If audio files were longer, then the data was truncated." ], [ "We applied the following design principles to all our models:", "Every convolutional layer is always followed by an appropriate max pooling layer. This helps in containing the explosion of parameters and keeps the model small and nimble.", "Convolutional blocks are defined as an individual block with multiple pairs of one convolutional layer and one max pooling layer. Each convolutional block is preceded or succeeded by a convolutional layer.", "Batch Normalization and Rectified linear unit activations were applied after each convolutional layer. Batch Normalization helps speed up convergence during training of a neural network.", "The model ends with a dense layer, which acts as the final output layer." ], [ "As the sampling rate is 8 kHz and the audio length is 10 s, the input to the models is raw audio with an input size of (batch size, 1, 80000). In Table TABREF10, we present a detailed layer-by-layer illustration of the model along with its hyperparameters." ], [ "Tuning hyperparameters is a cumbersome process as the hyperparameter space expands exponentially with the number of parameters; therefore, efficient exploration is needed for any feasible study. We used the random search algorithm supported by the Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. 
FIGREF12, various hyperparameters we considered are plotted against the validation accuracy as violin plots. Our observations for each hyperparameter are summarized below:", "Number of filters in the first layer: We observe that having 128 filters gives better results as compared to other filter values of 32 and 64 in the first layer. A higher number of filters in the first layer of the network is able to preserve most of the characteristics of the input.", "Kernel Size: We varied the receptive fields of convolutional layers by choosing the kernel size from among the set of {3, 5, 7, 9}. We observe that a kernel size of 9 gives better accuracy at the cost of increased computation time and a larger number of parameters. A large kernel size is able to capture longer patterns in its input due to its bigger receptive field, which results in improved accuracy.", "Dropout: Dropout randomly turns off (sets to 0) various individual nodes during training of the network. In a deep CNN it is important that nodes do not develop a co-dependency amongst each other during training in order to prevent overfitting on training data BIBREF25. A dropout rate of $0.1$ works well for our model. When using a higher dropout rate the network is not able to capture the patterns in the training dataset.", "Batch Size: We chose batch sizes from amongst the set {32, 64, 128}. There is more noise while calculating error in a smaller batch size as compared to a larger one. This tends to have a regularizing effect during training of the network and hence gives better results. Thus, a batch size of 32 works best for the model.", "Layers in Convolutional block 1 and 2: We varied the number of layers in both the convolutional blocks. If the number of layers is low, then the network does not have enough depth to capture patterns in the data, whereas having a large number of layers leads to overfitting on the data. In our network, two layers in the first block and one layer in the second block give optimal results." 
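The random search strategy described above (the paper uses the Hyperopt library over real training runs) can be illustrated with a dependency-free sketch; the search space below mirrors the ranges discussed for the 1D ConvNet, and the toy objective stands in for "train a model and report validation accuracy" — all names here are illustrative assumptions:

```python
import random

# Illustrative search space mirroring the ranges discussed for the 1D ConvNet
SPACE = {
    "n_filters": [32, 64, 128],
    "kernel_size": [3, 5, 7, 9],
    "dropout": [0.1, 0.25, 0.5],
    "batch_size": [32, 64, 128],
}

def random_search(objective, n_trials=20, seed=0):
    """Return (best_score, best_config) found by uniform random sampling."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        score = objective(cfg)  # stands in for training + validation accuracy
        if best is None or score > best[0]:
            best = (score, cfg)
    return best

# Toy objective that rewards the settings reported as optimal in the paper
def toy_objective(cfg):
    return cfg["n_filters"] / 128 - abs(cfg["dropout"] - 0.1)
```

In the paper's setup, Hyperopt's random-search algorithm plays the same role, but each trial is a full training run.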
], [ "The log-Mel spectrogram is the most commonly used method for converting audio into the image domain. The audio data was again sampled at 8 kHz. The input to this model was the log-Mel spectra. We generated log-Mel spectrograms using the LibROSA BIBREF26 library. In Table TABREF16, we present a detailed layer-by-layer illustration of the model along with its hyperparameters." ], [ "We made some specific design choices for this model, which are as follows:", "We added residual connections with each convolutional layer. Residual connections in a way make the model selective of the contributing layers, determine the optimal number of layers required for training and solve the problem of vanishing gradients. Residual connections, or skip connections, skip training of those layers that do not contribute much to the overall outcome of the model.", "We added spatial attention BIBREF27 networks to help the model focus on specific regions or areas in an image. Spatial attention aids learning irrespective of transformations, scaling and rotation done on the input images, making the model more robust and helping it to achieve better results.", "We added Channel Attention networks to help the model find interdependencies among color channels of log-Mel spectra. It adaptively assigns importance to each color channel in a deep convolutional multi-channel network. In our model we apply channel and spatial attention just before feeding the input into the bi-directional GRU. This helps the model to focus on selected regions and at the same time find patterns among channels to better determine the language." ], [ "We used the random search algorithm supported by the Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF19, various hyperparameters we tuned are plotted against the validation accuracy. 
Our observations for each hyperparameter are summarized below:", "Filter Size: 64 filters in the first layer of the network can preserve most of the characteristics of the input, but increasing it to 128 is inefficient as overfitting occurs.", "Kernel Size: There is a trade-off between kernel size and capturing complex non-linear features. Using a small kernel size will require more layers to capture features, whereas using a large kernel size will require fewer layers. Large kernels capture simple non-linear features, whereas using a smaller kernel will help us capture more complex non-linear features. However, with more layers, backpropagation necessitates the need for a large memory. We experimented with a large kernel size and gradually increased the layers in order to capture more complex features. The results are not conclusive, and thus we chose a kernel size of 7 over 3.", "Dropout: A dropout rate of 0.1 works well for our data. When using a higher dropout rate the network is not able to capture the patterns in the training dataset.", "Batch Size: There is always a trade-off between batch size and getting accurate gradients. Using a large batch size helps the model to get more accurate gradients since the model tries to optimize gradients over a large set of images. We found that using a batch size of 128 helped the model to train faster and get better results than using a batch size smaller than 128.", "Number of hidden units in bi-directional GRU: Varying the number of hidden units and layers in the GRU helps the model to capture temporal features, which can play a significant role in identifying the language correctly. The optimal number of hidden units and layers depends on the complexity of the dataset. Using fewer hidden units may capture fewer features, whereas using a large number of hidden units may be computationally expensive. 
In our case we found that using 1536 hidden units in a single bi-directional GRU layer leads to the best result.", "Image Size: We experimented with log-Mel spectra images of sizes $64 \\times 64$ and $128 \\times 128$ pixels and found that our model worked best with images of size $128 \\times 128$ pixels.", "We also evaluated our model on data with mixup augmentation BIBREF28. It is a data augmentation technique that also acts as a regularization technique and prevents overfitting. Instead of directly taking images from the training dataset as input, mixup takes a linear combination of any two random images and feeds it as input. The following equations were used to prepare a mixed-up dataset:", "$\\tilde{I} = \\alpha I_1 + (1 - \\alpha ) I_2$", "and", "$\\tilde{L} = \\alpha L_1 + (1 - \\alpha ) L_2$", "where $\\alpha \\in [0, 1]$ is a random variable from a $\\beta $-distribution, and $I_1$, $I_2$ are two randomly selected input images with corresponding labels $L_1$, $L_2$." ], [ "This model is similar to the 2D-ConvNet with Attention and bi-directional GRU described in section SECREF13, except that it lacks the skip connections, attention layers, bi-directional GRU and embedding layer incorporated in the previous model." ], [ "We classified six languages (English, French, German, Spanish, Russian and Italian) from the VoxForge BIBREF6 dataset. VoxForge is an open-source speech corpus which primarily consists of samples recorded and submitted by users using their own microphone. This results in significant variation of speech quality between samples, making it more representative of real-world scenarios.", "Our dataset consists of 1,500 samples for each of the six languages. Out of the 1,500 samples for each language, 1,200 were randomly selected as the training dataset for that language and the rest 300 as the validation dataset using k-fold cross-validation. To sum up, we trained our model on 7,200 samples and validated it on 1,800 samples comprising six languages. The results are discussed in the next section." ], [ "This paper discusses two end-to-end approaches which achieve state-of-the-art results in both the image and audio domains on the VoxForge dataset BIBREF6. 
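The mixup step described above can be sketched with the standard library; the standard mixup formulation is assumed here (one mixing coefficient drawn from a Beta distribution, applied to both the inputs and their one-hot labels), and the list-based representation is illustrative:

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2, rng=random):
    """Convex combination of two training examples (standard mixup form).

    x1, x2: feature vectors as lists of floats; y1, y2: one-hot label lists.
    The coefficient lam is drawn from a Beta(alpha, alpha) distribution.
    """
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y
```

Because the same coefficient mixes both inputs and labels, the mixed label remains a valid probability distribution, which is what gives mixup its regularizing effect.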
In Table TABREF25, we present all the classification accuracies of the two models for the cases with and without mixup for six and four languages.", "In the audio domain (using raw audio waveform as input), 1D-ConvNet achieved a mean accuracy of 93.7% with a standard deviation of 0.3% on running k-fold cross validation. In Fig. FIGREF27 (a) we present the confusion matrix for the 1D-ConvNet model.", "In the image domain (obtained by taking log-Mel spectra of raw audio), 2D-ConvNet with 2D attention (channel and spatial attention) and bi-directional GRU achieved a mean accuracy of 95.0% with a standard deviation of 1.2% for six languages. This model performed better when mixup regularization was applied. 2D-ConvNet achieved a mean accuracy of 95.4% with a standard deviation of 0.6% on running k-fold cross validation for six languages when mixup was applied. In Fig. FIGREF27 (b) we present the confusion matrix for the 2D-ConvNet model. 2D attention models focused on the important features extracted by convolutional layers, and the bi-directional GRU captured the temporal features." ], [ "Several of the spoken languages in Europe belong to the Indo-European family. Within this family, the languages are divided into three phyla: Romance, Germanic and Slavic. Of the six languages that we selected, Spanish (Es), French (Fr) and Italian (It) belong to the Romance phylum, English and German belong to the Germanic phylum, and Russian to the Slavic phylum. 
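The per-language error analysis above relies on the confusion matrices of Fig. FIGREF27, which tabulate true versus predicted languages; a minimal sketch of how such a matrix and the overall accuracy are computed (function names are illustrative assumptions):

```python
def confusion_matrix(y_true, y_pred, labels):
    """Rows index the true language, columns the predicted language."""
    index = {lab: i for i, lab in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[index[t]][index[p]] += 1
    return m

def accuracy(y_true, y_pred):
    """Fraction of samples whose predicted language matches the true one."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

Off-diagonal mass between rows of related languages (e.g. the Romance ones) is exactly the kind of confusion the misclassification discussion examines.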
Our model also confuses languages belonging to the same phylum, which acts as a sanity check, since languages in the same phylum have many similarly pronounced words: cat in English becomes Katze in German, and Ciao in Italian becomes Chao in Spanish.", "Our model confuses French (Fr) and Russian (Ru); while these languages belong to different phyla, many words from French were adopted into Russian, such as automate (oot-oo-mate) in French, which becomes автомат (aff-taa-maat) in Russian with a similar pronunciation.", "" ], [ "The performance of raw audio waveforms as input features to a ConvNet can be further improved by applying silence removal in the audio. Also, there is scope for improvement by augmenting available data through various conventional techniques like pitch shifting, adding random noise and changing the speed of audio. These help in making neural networks more robust to variations which might be present in real-world scenarios. There can be further exploration of various feature extraction techniques like the Constant-Q Transform and Fast Fourier Transform, and assessment of their impact on Language Identification.", "There can be further improvements in neural network architectures, like concatenating the high-level features obtained from 1D-ConvNet and 2D-ConvNet before performing classification. There can be experiments using deeper networks with skip connections and Inception modules. These are known to have positively impacted the performance of Convolutional Neural Networks." 
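Two of the conventional augmentations mentioned in the future scope (additive random noise and speed change) could be sketched as follows; the parameter values are illustrative assumptions, and the speed change shown is a naive index-resampling that, unlike proper pitch shifting, does not preserve pitch:

```python
import random

def add_noise(samples, scale=0.005, rng=random):
    """Additive uniform noise: a simple robustness augmentation."""
    return [s + rng.uniform(-scale, scale) for s in samples]

def change_speed(samples, factor):
    """Naive speed change by index resampling (factor > 1 speeds up)."""
    n = int(len(samples) / factor)
    return [samples[min(int(i * factor), len(samples) - 1)] for i in range(n)]
```

Augmented copies produced this way would be passed through the same fixed-length preparation as the original clips before training.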
This method is able to bypass the computational overhead of conventional approaches which depend on generation of spectrograms as a necessary pre-processing step. We were able to achieve an accuracy of 93.7% using this technique.", "Next, we discussed the enhancement in performance of 2D-ConvNet using mixup augmentation, which is a recently developed technique to prevent overfitting on test data. This approach achieved an accuracy of 95.4%. We also analysed how the attention mechanism and recurrent layers impact the performance of networks. This approach achieved an accuracy of 95.0%." ] ] } { "question": [ "Does the model use both spectrogram images and raw waveforms as features?", "Is the performance compared against a baseline model?", "What is the accuracy reported by state-of-the-art methods?" ], "question_id": [ "dc1fe3359faa2d7daa891c1df33df85558bc461b", "922f1b740f8b13fdc8371e2a275269a44c86195e", "b39f2249a1489a2cef74155496511cc5d1b2a73d" ], "nlp_background": [ "two", "two", "two" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "language identification", "language identification", "language identification" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "FLOAT SELECTED: Table 4: Results of the two models and all its variations" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Results of the two models and all its variations" ] } ], "annotation_id": [ "32dee5de8cb44c67deef309c16e14e0634a7a95e" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "In Table TABREF1, we summarize the quantitative results of the above previous 
studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)." ], "highlighted_evidence": [ "In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)." ] }, { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "1a51115249ab15633d834cd3ea7d986f6cc8d7c1", "55b711611cb5f52eab6c38051fb155c5c37234ff" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Answer with content missing: (Table 1)\nPrevious state-of-the art on same dataset: ResNet50 89% (6 languages), SVM-HMM 70% (4 languages)", "evidence": [ "In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). 
The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)." ], "highlighted_evidence": [ "In Table TABREF1, we summarize the quantitative results of the above previous studies.", "In Table TABREF1, we summarize the quantitative results of the above previous studies." ] } ], "annotation_id": [ "2405966a3c4bcf65f3b59888f345e2b0cc5ef7b0" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] } 1906.00378 Unsupervised Bilingual Lexicon Induction from Mono-lingual Multimodal Data Bilingual lexicon induction, translating words from the source language to the target language, is a long-standing natural language processing task. Recent endeavors prove that it is promising to employ images as pivot to learn the lexicon induction without reliance on parallel corpora. However, these vision-based approaches simply associate words with entire images, which are constrained to translate concrete words and require object-centered images. We humans can understand words better when they are within a sentence with context. Therefore, in this paper, we propose to utilize images and their associated captions to address the limitations of previous approaches. We propose a multi-lingual caption model trained with different mono-lingual multimodal data to map words in different languages into joint spaces. Two types of word representation are induced from the multi-lingual caption model: linguistic features and localized visual features. The linguistic feature is learned from the sentence contexts with visual semantic constraints, which is beneficial to learn translation for words that are less visual-relevant. The localized visual feature is attended to the region in the image that correlates to the word, so that it alleviates the image restriction for salient visual representation. 
The two types of features are complementary for word translation. Experimental results on multiple language pairs demonstrate the effectiveness of our proposed method, which substantially outperforms previous vision-based approaches without using any parallel sentences or supervision of seed word pairs. { "section_name": [ "Introduction", "Related Work", "Unsupervised Bilingual Lexicon Induction", "Multi-lingual Image Caption Model", "Visual-guided Word Representation", "Word Translation Prediction", "Datasets", "Experimental Setup", "Evaluation of Multi-lingual Image Caption", "Evaluation of Bilingual Lexicon Induction", "Generalization to Diverse Language Pairs", "Conclusion", " Acknowledgments" ], "paragraphs": [ [ "The bilingual lexicon induction task aims to automatically build word translation dictionaries across different languages, which is beneficial for various natural language processing tasks such as cross-lingual information retrieval BIBREF0 , multi-lingual sentiment analysis BIBREF1 , machine translation BIBREF2 and so on. Although building bilingual lexicon has achieved success with parallel sentences in resource-rich languages BIBREF2 , the parallel data is insufficient or even unavailable especially for resource-scarce languages and it is expensive to collect. On the contrary, there are abundant multimodal mono-lingual data on the Internet, such as images and their associated tags and descriptions, which motivates researchers to induce bilingual lexicon from these non-parallel data without supervision.", "There are mainly two types of mono-lingual approaches to build bilingual dictionaries in recent works. The first is purely text-based, which explores the structure similarity between different linguistic space. The most popular approach among them is to linearly map source word embedding into the target word embedding space BIBREF3 , BIBREF4 . The second type utilizes vision as bridge to connect different languages BIBREF5 , BIBREF6 , BIBREF7 . 
It assumes that words correlating to similar images should share similar semantic meanings. Previous vision-based methods therefore search images with multi-lingual words and translate words according to the similarities of visual features extracted from the corresponding images. It has been proved that visual-grounded word representations improve the semantic quality of words BIBREF8.", "However, previous vision-based methods suffer from two limitations for bilingual lexicon induction. Firstly, the accurate translation performance is confined to concrete visual-relevant words such as nouns and adjectives, as shown in Figure SECREF2. For words without high-quality visual groundings, previous methods would generate poor translations BIBREF7. Secondly, previous works extract visual features from the whole image to represent words and thus require object-centered images in order to obtain reliable visual groundings. However, common images usually contain multiple objects or scenes, and the word might only be grounded to part of the image; therefore, the global visual features will be quite noisy to represent the word.", "In this paper, we address the two limitations by learning from mono-lingual multimodal data with both sentence and visual context (e.g., image and caption data) to induce bilingual lexicon. Such multimodal data is also easily obtained for different languages on the Internet BIBREF9. We propose a multi-lingual image caption model trained on multiple mono-lingual image caption data, which is able to induce two types of word representations for different languages in the joint space. The first is the linguistic feature learned from the sentence context with visual semantic constraints, so that it is able to generate more accurate translations for words that are less visual-relevant. 
The second is the localized visual feature which attends to the local region of the object or scene in the image for the corresponding word, so that the visual representation of words will be more salient than previous global visual features. The two representations are complementary and can be combined to induce better bilingual word translation.", "We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces. Our proposed method consistently outperforms previous state-of-the-art vision-based bilingual word induction approaches on different languages. The contributions of this paper are as follows:" ], [ "The early works for bilingual lexicon induction require parallel data in different languages. BIBREF2 systematically investigates various word alignment methods with parallel texts to induce bilingual lexicon. However, the parallel data is scarce or even unavailable for low-resource languages. Therefore, methods with less dependency on the availability of parallel corpora are highly desired.", "There are mainly two types of mono-lingual approaches for bilingual lexicon induction: text-based and vision-based methods. The text-based methods purely exploit the linguistic information to translate words. The initiative works BIBREF10 , BIBREF11 utilize word co-occurrences in different languages as clue for word alignment. With the improvement in word representation based on deep learning, BIBREF3 finds the structure similarity of the deep-learned word embeddings in different languages, and employs a parallel vocabulary to learn a linear mapping from the source to target word embeddings. 
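The seed-lexicon linear-mapping approach just described (learning a map from source to target embeddings, optionally with an orthogonality constraint) has a well-known closed-form solution via orthogonal Procrustes. A minimal sketch with toy random embeddings (not the cited systems' code; array shapes are illustrative assumptions):

```python
import numpy as np

def procrustes_mapping(X, Y):
    """Closed-form solution of min_W ||XW - Y||_F s.t. W orthogonal.

    X: (n, d) source-language embeddings of seed word pairs.
    Y: (n, d) target-language embeddings of the same pairs.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt  # (d, d) orthogonal mapping

# Toy sanity check: if Y is an exact rotation of X, Procrustes recovers it.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # a random orthogonal matrix
Y = X @ Q
W = procrustes_mapping(X, Y)
print(np.allclose(X @ W, Y, atol=1e-8))
```

The orthogonality constraint is what makes the problem solvable in closed form from the SVD of the cross-covariance, which is why it is a popular refinement over an unconstrained least-squares mapping.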
BIBREF12 improves the translation performance by adding an orthogonality constraint to the mapping. BIBREF13 further introduces a matching mechanism to induce bilingual lexicon with fewer seeds. However, these models require a seed lexicon as the starting point to train the bilingual mapping. Recently, BIBREF4 proposes an adversarial learning approach to learn the joint bilingual embedding space without any seed lexicon.", "The vision-based methods exploit images to connect different languages, which assume that words corresponding to similar images are semantically alike. BIBREF5 collects images with labeled words in different languages to learn word translation with images as the pivot. BIBREF6 improves the vision-based word translation performance by using more powerful visual representations: the CNN-based BIBREF14 features. The above works mainly focus on the translation of nouns and are limited in the number of collected languages. The recent work BIBREF7 constructs the current largest (with respect to the number of language pairs and types of part-of-speech) multimodal word translation dataset, MMID. They show that concrete words are easiest for vision-based translation methods while others are much less accurate. In our work, we alleviate the limitations of previous vision-based methods by exploring images and their captions rather than images with unstructured tags to connect different languages.", "Image captioning has received increasing research attention. Most image caption works focus on English caption generation BIBREF15 , BIBREF16 , while only a few works consider generating multi-lingual captions. The recent WMT workshop BIBREF17 has proposed a subtask of multi-lingual caption generation, where different strategies such as multi-task captioning and source-to-target translation followed by captioning have been proposed to generate captions in target languages. 
Our work proposes a multi-lingual image caption model that shares part of its parameters across different languages so that the languages benefit each other." ], [ "Our goal is to induce bilingual lexicon without supervision of parallel sentences or seed word pairs, purely based on the mono-lingual image caption data. In the following, we introduce the multi-lingual image caption model, whose objectives for bilingual lexicon induction are twofold: 1) explicitly build multi-lingual word embeddings in the joint linguistic space; 2) implicitly extract the localized visual features for each word in the shared visual space. The former encodes linguistic information of words while the latter encodes the visually grounded information; the two are complementary for bilingual lexicon induction." ], [ "Suppose we have mono-lingual image caption datasets INLINEFORM0 in the source language and INLINEFORM1 in the target language. The images INLINEFORM2 in INLINEFORM3 and INLINEFORM4 do not necessarily overlap, but cover overlapping object or scene classes, which is the basic assumption of vision-based methods. For notational simplicity, we omit the superscript INLINEFORM5 for the data sample. Each image caption INLINEFORM6 and INLINEFORM7 is composed of word sequences INLINEFORM8 and INLINEFORM9 respectively, where INLINEFORM10 is the sentence length.", "The proposed multi-lingual image caption model aims to generate sentences in different languages to describe the image content, thus connecting the vision and the multi-lingual sentences. Figure FIGREF15 illustrates the framework of the caption model, which consists of three parts: the image encoder, the word embedding module and the language decoder.", "The image encoder encodes the image into the shared visual space. We apply the Resnet152 BIBREF18 as our encoder INLINEFORM0 , which produces INLINEFORM1 vectors corresponding to different spatial locations in the image: DISPLAYFORM0 ", "where INLINEFORM0 . 
The parameter INLINEFORM1 of the encoder is shared for different languages in order to encode all the images in the same visual space.", "The word embedding module maps the one-hot word representation in each language into low-dimensional distributional embeddings: DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 are the word embedding matrices for the source and target languages respectively. INLINEFORM2 and INLINEFORM3 are the vocabulary sizes of the two languages.", "The decoder then generates words step by step, conditioned on the encoded image feature and previously generated words. The probability of generating INLINEFORM0 in the source language is as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the hidden state of the decoder at step INLINEFORM1 , which is computed by an LSTM BIBREF19 : DISPLAYFORM0 ", "The INLINEFORM0 is the dynamically located contextual image feature used to generate word INLINEFORM1 via an attention mechanism; it is the weighted sum of INLINEFORM2 computed by DISPLAYFORM0 DISPLAYFORM1 ", "where INLINEFORM0 is a fully connected neural network. The parameter INLINEFORM1 in the decoder includes all the weights in the LSTM and the attention network INLINEFORM2 .", "Similarly, INLINEFORM0 is the probability of generating INLINEFORM1 in the target language, which shares INLINEFORM2 with the source language. By sharing the same parameters across different languages in the encoder and decoder, both the visual features and the learned word embeddings for different languages are enforced to project into a joint semantic space. 
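The attention step described above (scoring each spatial image feature against the decoder state and taking the weighted sum) can be sketched in NumPy as follows; all shapes and parameter names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(h, V, W_a, w_a):
    """One spatial-attention step: score each image region against the
    decoder hidden state, softmax the scores, and return the weighted sum.

    h: (d_h,) decoder hidden state; V: (k, d_v) spatial image features.
    W_a: (d_h + d_v, d_a) and w_a: (d_a,) form a small scoring network
    (hypothetical names, standing in for the paper's attention MLP).
    """
    scores = np.array([np.tanh(np.concatenate([h, v]) @ W_a) @ w_a for v in V])
    alpha = softmax(scores)   # attention weights over the k regions
    c = alpha @ V             # (d_v,) attended context feature
    return c, alpha

rng = np.random.default_rng(1)
h = rng.normal(size=8)                # decoder state
V = rng.normal(size=(5, 16))          # 5 spatial locations, 16-dim features
W_a = rng.normal(size=(24, 10))
w_a = rng.normal(size=10)
c, alpha = attend(h, V, W_a, w_a)
print(c.shape, float(alpha.sum()))    # context is (16,); weights sum to 1
```

Because the softmax weights are non-negative and sum to one, the context vector stays inside the convex hull of the region features, which is what makes per-word localization (used later for the visual word representations) possible.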
Note that the proposed multi-lingual parameter-sharing strategy is not restricted to the presented image captioning model; it can be applied to various image captioning models such as the show-tell model BIBREF15 .", "We use maximum likelihood as the objective function to train the multi-lingual caption model, which maximizes the log-probability of the ground-truth captions: DISPLAYFORM0 " ], [ "The proposed multi-lingual caption model can induce similarities of words in different languages from two aspects: the linguistic similarity and the visual similarity. In the following, we discuss the two types of similarity and then construct the source and target word representations.", "The linguistic similarity is reflected in the learned word embeddings INLINEFORM0 and INLINEFORM1 in the multi-lingual caption model. As shown in previous works BIBREF20 , word embeddings learned from the language contexts can capture syntactic and semantic regularities in the language. However, if the word embeddings of different languages are trained independently, they are not in the same linguistic space and we cannot compute similarities directly. In our multi-lingual caption model, since images in INLINEFORM2 and INLINEFORM3 share the same visual space, the features of sentences INLINEFORM4 and INLINEFORM5 belonging to similar images are bound to be close in the same space under the visual constraints. Meanwhile, the language decoder is also shared, which enforces the word embeddings across languages into the same semantic space in order to generate similar sentence features. Therefore, INLINEFORM6 and INLINEFORM7 not only encode the linguistic information of different languages but also share the embedding space, which enables direct cross-lingual similarity comparison. 
We refer to the linguistic features of source and target words INLINEFORM8 and INLINEFORM9 as INLINEFORM10 and INLINEFORM11 respectively.", "For the visual similarity, the multi-lingual caption model locates the image region used to generate each word based on the spatial attention in Eq ( EQREF13 ), which can be used to calculate the localized visual representation of the word. However, since the attention is computed before word generation, the localization performance can be less accurate. It also cannot be generalized to image captioning models without spatial attention. Therefore, inspired by BIBREF21 , where regions of the image are occluded to observe the change in classification probabilities, we feed different parts of the image to the caption model and investigate the probability changes for each word in the sentence. Algorithm SECREF16 presents the procedure of word localization and grounded visual feature generation. Please note that such visual grounding is learned from the image caption data without supervision. Therefore, every word can be represented as a set of grounded visual features (the set size equals the word's occurrence count in the dataset). We refer to the localized visual feature set for source word INLINEFORM0 as INLINEFORM1 and for target word INLINEFORM2 as INLINEFORM3 .", "Generating localized visual features. Input: encoded image features INLINEFORM0 , sentence INLINEFORM1 . Output: localized visual features for each word INLINEFORM2 . For each INLINEFORM3 : compute INLINEFORM4 according to Eq ( EQREF10 ) INLINEFORM5 INLINEFORM6 INLINEFORM7 " ], [ "Since the word representations of the source and target languages are in the same space, we can directly compute similarities across languages. We apply l2-normalization to the word representations and measure similarity with the cosine similarity. 
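The translation scoring just introduced (cosine similarity over l2-normalized features) together with the visual-set aggregation and feature fusion described in this section can be sketched as follows; the mean aggregation and the simple additive fusion rule are assumptions for illustration, and all word names and dimensions are toy values:

```python
import numpy as np

def l2norm(x):
    return x / np.linalg.norm(x)

def fused_score(src_ling, tgt_ling, src_vis_set, tgt_vis_set):
    """Cosine similarity of l2-normalized linguistic embeddings plus cosine
    similarity of the mean-aggregated visual feature sets (additive fusion
    is an assumption; the paper combines the two scores)."""
    s_ling = float(l2norm(src_ling) @ l2norm(tgt_ling))
    s_vis = float(l2norm(src_vis_set.mean(axis=0)) @ l2norm(tgt_vis_set.mean(axis=0)))
    return s_ling + s_vis

def translate(src_word, src_feats, tgt_feats):
    """Pick the target word maximizing the fused similarity score."""
    ling, vis = src_feats[src_word]
    return max(tgt_feats,
               key=lambda w: fused_score(ling, tgt_feats[w][0], vis, tgt_feats[w][1]))

rng = np.random.default_rng(2)
ling = rng.normal(size=32)
vis = rng.normal(size=(3, 64))        # a set of 3 localized visual features
src_feats = {"Hund": (ling, vis)}
# the correct translation shares (noisy copies of) the source features
tgt_feats = {
    "dog": (ling + 0.01 * rng.normal(size=32), vis + 0.01 * rng.normal(size=(3, 64))),
    "cat": (rng.normal(size=32), rng.normal(size=(3, 64))),
}
print(translate("Hund", src_feats, tgt_feats))  # "dog"
```

Aggregating each word's visual set into one mean vector before the cosine keeps the cost independent of word frequency, mirroring the motivation given in the text.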
For linguistic features, the similarity is measured as: DISPLAYFORM0 ", "However, a set of visual features is associated with each word, so the visual similarity measurement between two words must take two sets of visual features as input. We aggregate the visual features into a single representation and then compute the cosine similarity, instead of computing point-wise similarities between the two sets: DISPLAYFORM0 ", "The reasons for performing aggregation are twofold. Firstly, the number of visual features is proportional to the word's occurrence count in our approach instead of a fixed number as in BIBREF6 , BIBREF7 , so the computation cost for frequent words is much higher. Secondly, the aggregation helps to reduce noise, which is especially important for abstract words. Abstract words such as 'event' are more visually diverse, but the overall styles of multiple images can reflect their visual semantics.", "Due to the complementary characteristics of the two features, we combine them to predict the word translation. The translated word for INLINEFORM0 is DISPLAYFORM0 " ], [ "For image captioning, we utilize the multi30k BIBREF22 , COCO BIBREF23 and STAIR BIBREF24 datasets. The multi30k dataset contains 30k images and annotations under two tasks. In task 1, each image is annotated with one English description which is then translated into German and French. In task 2, the image is independently annotated with 5 descriptions in English and German respectively. For the German and English languages, we utilize annotations in task 2. For the French language, we can only employ the French descriptions in task 1, so the training size for French is smaller than for the other two languages. The COCO and STAIR datasets contain the same image set but are independently annotated in English and Japanese. Since the images in the wild for different languages might not overlap, we randomly split the image set into two disjoint parts of equal size. 
The images in each part only contain the mono-lingual captions. We use Moses SMT Toolkit to tokenize sentences and select words occurring more than five times in our vocabulary for each language. Table TABREF21 summarizes the statistics of caption datasets.", "For bilingual lexicon induction, we use two visual datasets: BERGSMA and MMID. The BERGSMA dataset BIBREF5 consists of 500 German-English word translation pairs. Each word is associated with no more than 20 images. The words in BERGSMA dataset are all nouns. The MMID dataset BIBREF7 covers a larger variety of words and languages, including 9,808 German-English pairs and 9,887 French-English pairs. The source word can be mapped to multiple target words in their dictionary. Each word is associated with no more than 100 retrieved images. Since both these image datasets do not contain Japanese language, we download the Japanese-to-English dictionary online. We select words in each dataset that overlap with our caption vocabulary, which results in 230 German-English pairs in BERGSMA dataset, 1,311 German-English pairs and 1,217 French-English pairs in MMID dataset, and 2,408 Japanese-English pairs." ], [ "For the multi-lingual caption model, we set the word embedding size and the hidden size of LSTM as 512. Adam algorithm is applied to optimize the model with learning rate of 0.0001 and batch size of 128. 
The caption model is trained up to 100 epochs and the best model is selected according to caption performance on the validation set.", "We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:", "CNN-mean: taking the similarity score of the averaged feature of the two image sets.", "CNN-avgmax: taking the average of the maximum similarity scores of two image sets.", "We evaluate the word translation performance using MRR (mean-reciprocal rank) as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the groundtruth translated words for source word INLINEFORM1 , and INLINEFORM2 denotes the rank of groundtruth word INLINEFORM3 in the rank list of translation candidates. We also measure the precision at K (P@K) score, which is the proportion of source words whose groundtruth translations rank in the top K words. We set K as 1, 5, 10 and 20." ], [ "We first evaluate the captioning performance of the proposed multi-lingual caption model, which serves as the foundation stone for our bilingual lexicon induction method.", "We compare the proposed multi-lingual caption model with the mono-lingual model, which consists of the same model structure, but is trained separately for each language. Table TABREF22 presents the captioning results on the multi30k dataset, where all the languages are from the Latin family. The multi-lingual caption model achieves comparable performance with mono-lingual model for data sufficient languages such as English and German, and significantly outperforms the mono-lingual model for the data-scarce language French with absolute 3.22 gains on the CIDEr metric. For languages with distinctive grammar structures such as English and Japanese, the multi-lingual model is also on par with the mono-lingual model as shown in Table TABREF29 . 
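The MRR and P@K metrics defined in the experimental setup can be sketched as below; this uses the best-ranked ground-truth translation per source word, a common convention, while the paper's exact MRR formula may instead average over all ground-truth words:

```python
def mrr_and_pk(rankings, gold, ks=(1, 5, 10, 20)):
    """MRR and precision@K for word-translation rankings.

    rankings: list of candidate lists, best-first, one per source word.
    gold: list of sets of ground-truth translations, aligned with rankings.
    """
    n = len(rankings)
    rr = []
    hits = {k: 0 for k in ks}
    for cands, gold_set in zip(rankings, gold):
        ranks = [i + 1 for i, w in enumerate(cands) if w in gold_set]
        if ranks:
            best = min(ranks)           # rank of the best-placed gold word
            rr.append(1.0 / best)
            for k in ks:
                hits[k] += int(best <= k)
        else:
            rr.append(0.0)              # no gold word retrieved at all
    return sum(rr) / n, {k: hits[k] / n for k in ks}

mrr, pk = mrr_and_pk(
    [["cat", "dog", "house"], ["tree", "car", "sun"]],
    [{"dog"}, {"moon"}],
    ks=(1, 5),
)
print(mrr, pk)  # 0.25 {1: 0.0, 5: 0.5}
```

In the toy call, the first source word's gold translation sits at rank 2 (reciprocal rank 0.5) and the second is never retrieved, giving MRR 0.25 and P@5 of 0.5.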
Note that the multi-lingual model contains about half as many parameters as the independent mono-lingual models, making it more computationally efficient.", "We visualize the learned visual groundings from the multi-lingual caption model in Figure FIGREF32 . Though there are certain mistakes, such as 'musicians' in the bottom image, most of the words are grounded well with correct objects or scenes, and thus can obtain more salient visual features." ], [ "We induce the linguistic features and localized visual features from the multi-lingual caption model for word translation from the source to target languages. Table TABREF30 presents the German-to-English word translation performance of the proposed features. In the BERGSMA dataset, the visual features achieve better translation results than the linguistic features, while they are inferior to the linguistic features in the MMID dataset. This is because the vocabulary in the BERGSMA dataset mainly consists of nouns, but the parts of speech are more diverse in the MMID dataset. The visual features contribute most to translating concrete nouns, while the linguistic features are beneficial for other, more abstract words. The fusion of the two features performs best for word translation, which demonstrates that the two features are complementary to each other.", "We also compare our approach with previous state-of-the-art vision-based methods in Table TABREF30 . Since our visual feature is an averaged representation, it is fair to compare with the CNN-mean baseline method, where the only difference lies in the feature rather than the similarity measurement. The localized features perform substantially better than the global image features, which demonstrates the effectiveness of the attention learned from the caption model. 
The combination of visual and linguistic features also significantly improves the state-of-the-art vision-based CNN-avgmax method, with 11.6% and 6.7% absolute gains on P@1 on the BERGSMA and MMID datasets respectively.", "In Figure FIGREF36 , we present the word translation performance for different POS (part-of-speech) labels. We assign the POS label for words in different languages according to their translations in English. We can see that the previous state-of-the-art vision-based approach contributes mostly to noun words, which are most visual-relevant, while generating poor translations for other part-of-speech words. Our approach, however, substantially improves the translation performance for all part-of-speech classes. For concrete words such as nouns and adjectives, the localized visual features produce better representations than previous global visual features; and for other part-of-speech words, the linguistic features, which are learned with sentence context, effectively complement the visual features. The fusion of the linguistic and localized visual features in our approach leads to significant performance improvement over the state-of-the-art baseline method for all types of POS classes.", "Some correct and incorrect translation examples for different POS classes are shown in Table TABREF34 . The visual-relevant concrete words are easier to translate, such as 'phone' and 'red'. But our approach still generates reasonable results for abstract words such as 'area' and functional words such as 'for', due to the fusion of visual and sentence contexts.", "We also evaluate the influence of different image captioning structures on the bilingual lexicon induction. We compare our attention model ('attn') with the vanilla show-tell model ('mp') BIBREF15 , which applies mean pooling over spatial image features to generate captions and achieves inferior caption performance to the attention model. 
Table TABREF35 shows the word translation performance of the two caption models. The attention model with better caption performance also induces better linguistic and localized visual features for bilingual lexicon induction. Nevertheless, the show-tell model still outperforms the previous vision-based methods in Table TABREF30 ." ], [ "Besides German-to-English word translation, we extend our approach to other languages, including French and Japanese, the latter of which is more distant from English.", "The French-to-English word translation performance is presented in Table TABREF39 . Note that the training data for the French captions is five times smaller than for the German captions, which makes the French-to-English word translation performance less competitive with German-to-English. But similarly, the fusion of linguistic and visual features achieves the best performance, boosting the baseline methods with 4.2% relative gains on the MRR metric and 17.4% relative improvement on the P@20 metric.", "Table TABREF40 shows the Japanese-to-English word translation performance. Since the language structures of Japanese and English are quite different, the linguistic features learned from the multi-lingual caption model are less effective, but they can still complement the visual features to improve the translation quality. The results on multiple diverse language pairs further demonstrate the generalization of our approach to different languages." ], [ "In this paper, we address the problem of bilingual lexicon induction without reliance on parallel corpora. Based on the observation that we humans understand words better when they are within context and can learn word translations using the external world (e.g., images) as a pivot, we propose a new vision-based approach to induce bilingual lexicon with images and their associated sentences. We build a multi-lingual caption model from multiple mono-lingual multimodal datasets to map words in different languages into joint spaces. 
Two types of word representation, linguistic features and localized visual features, are induced from the caption model. The two types of features are complementary for word translation. Experimental results on multiple language pairs demonstrate the effectiveness of our proposed method, which leads to significant performance improvement over the state-of-the-art vision-based approaches for all types of part-of-speech. In the future, we will further expand the vision-pivot approaches for zero-resource machine translation without parallel sentences." ], [ "This work was supported by National Natural Science Foundation of China under Grant No. 61772535, National Key Research and Development Plan under Grant No. 2016YFB1001202 and Research Foundation of Beijing Municipal Science & Technology Commission under Grant No. Z181100008918002." ] ] } { "question": [ "Which vision-based approaches does this approach outperform?", "What baseline is used for the experimental setup?", "Which languages are used in the multi-lingual caption model?" 
], "question_id": [ "591231d75ff492160958f8aa1e6bfcbbcd85a776", "9e805020132d950b54531b1a2620f61552f06114", "95abda842c4df95b4c5e84ac7d04942f1250b571" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "irony", "irony", "irony" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "CNN-mean", "CNN-avgmax" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:", "CNN-mean: taking the similarity score of the averaged feature of the two image sets.", "CNN-avgmax: taking the average of the maximum similarity scores of two image sets." ], "highlighted_evidence": [ "We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:\n\nCNN-mean: taking the similarity score of the averaged feature of the two image sets.\n\nCNN-avgmax: taking the average of the maximum similarity scores of two image sets." 
] } ], "annotation_id": [ "bdc283d7bd798de2ad5934ef59f1ff34e3db6d9a" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "CNN-mean", "CNN-avgmax" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:", "CNN-mean: taking the similarity score of the averaged feature of the two image sets.", "CNN-avgmax: taking the average of the maximum similarity scores of two image sets." ], "highlighted_evidence": [ "We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:\n\nCNN-mean: taking the similarity score of the averaged feature of the two image sets.\n\nCNN-avgmax: taking the average of the maximum similarity scores of two image sets." ] } ], "annotation_id": [ "1c1034bb22669723f38cbe0bf4e9ddd20d3a62f3" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "German-English, French-English, and Japanese-English" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces. Our proposed method consistently outperforms previous state-of-the-art vision-based bilingual word induction approaches on different languages. 
The contributions of this paper are as follows:" ], "highlighted_evidence": [ "We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. " ] }, { "unanswerable": false, "extractive_spans": [ "multiple language pairs including German-English, French-English, and Japanese-English." ], "yes_no": null, "free_form_answer": "", "evidence": [ "We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces. Our proposed method consistently outperforms previous state-of-the-art vision-based bilingual word induction approaches on different languages. The contributions of this paper are as follows:" ], "highlighted_evidence": [ "We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces." ] } ], "annotation_id": [ "5d299b615630a19692ca84d5cb48298cb9d97168", "62775a4f4030a57e74e1711c20a2c7c9869075df" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] } 1912.13072 AraNet: A Deep Learning Toolkit for Arabic Social Media We describe AraNet, a collection of deep learning Arabic social media processing tools. 
Namely, we exploit an extensive host of publicly available and novel social media datasets to train bidirectional encoder representations from transformers (BERT) to predict age, dialect, gender, emotion, irony, and sentiment. AraNet delivers state-of-the-art performance on a number of the cited tasks and competitive performance on others. In addition, AraNet has the advantage of being exclusively based on a deep learning framework and hence feature-engineering free. To the best of our knowledge, AraNet is the first to perform predictions across such a wide range of tasks for Arabic NLP and thus meets a critical need. We publicly release AraNet to accelerate research and facilitate comparisons across the different tasks. { "section_name": [ "Introduction", "Introduction ::: ", "Methods", "Data and Models ::: Age and Gender", "Data and Models ::: Age and Gender ::: ", "Data and Models ::: Dialect", "Data and Models ::: Emotion", "Data and Models ::: Irony", "Data and Models ::: Sentiment", "AraNet Design and Use", "Related Works", "Conclusion" ], "paragraphs": [ [ "The proliferation of social media has made it possible to study large online communities at scale, thus making important discoveries that can facilitate decision making, guide policies, improve health and well-being, aid disaster response, etc. The wide host of languages, language varieties, and dialects used on social media and the nuanced differences between users of various backgrounds (e.g., different age groups, gender identities) make it especially difficult to derive sufficiently valuable insights based on single prediction tasks. For these reasons, it would be desirable to offer NLP tools that can help stitch together a complete picture of an event across different geographical regions as impacting, and being impacted by, individuals of different identities. We offer AraNet as one such tool for Arabic social media processing." 
], [ "For Arabic, a collection of languages and varieties spoken by a wide population of $\sim 400$ million native speakers covering a vast geographical region (shown in Figure FIGREF2), no such suite of tools currently exists. Many works have focused on sentiment analysis, e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 and dialect identification BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. However, there is a general rarity of resources for other tasks such as gender and age detection. This motivates our toolkit, which we hope can meet the current critical need for studying Arabic communities online. This is especially valuable given the waves of protests, uprisings, and revolutions that have been sweeping the region during the last decade.", "Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet, our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to. Hence, although we provide some baseline models for some of the tasks, we do not explicitly compare to previous research, since most existing works either exploit smaller data (and so a comparison would not be fair) or use methods pre-dating BERT (and so would likely be outperformed by our models). For many of the tasks we model, there have not been standard benchmarks for comparisons across models. This makes it difficult to measure progress and identify areas worthy of allocating efforts and budgets. As such, by publishing our toolkit models, we believe model-based comparisons will be one way to relieve this bottleneck. For these reasons, we also package models from our recent works on dialect BIBREF12 and irony BIBREF14 as part of AraNet.", "The rest of the paper is organized as follows: In Section SECREF2 we describe our methods. 
In Section SECREF3, we describe or refer to published literature for the datasets we exploit for each task and provide the results our corresponding models acquire. Section SECREF4 is about AraNet design and use, and we overview related works in Section SECREF5. We conclude in Section SECREF6." ], [ "Supervised BERT. Across all our tasks, we use Bidirectional Encoder Representations from Transformers (BERT). BERT BIBREF15 dispenses with recurrence and convolution. It is based on a multi-layer bidirectional Transformer encoder BIBREF16, with multi-head attention. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task. The pre-trained BERT can be easily fine-tuned on a wide host of sentence-level and token-level tasks. All our models are trained in a fully supervised fashion, with dialect id being the only task where we leverage semi-supervised learning. We briefly outline our semi-supervised methods next.", "Self-Training. Only for the dialect id task, we investigate augmenting our human-labeled training data with automatically-predicted data from self-training. Self-training is a wrapper method for semi-supervised learning BIBREF17, BIBREF18 where a classifier is initially trained on a (usually small) set of labeled samples $\textbf {\textit {D}}^{l}$, and is then used to classify an unlabeled sample set $\textbf {\textit {D}}^{u}$. The most confident predictions acquired by the original supervised model are added to the labeled set, and the model is iteratively re-trained. We perform self-training using different confidence thresholds and choose different percentages of predicted data to add to our training set. We only report the best settings here; the reader is referred to our winning system on the MADAR shared task for more details on these different settings BIBREF12.", "Implementation & Models Parameters. 
For all our tasks, we use the BERT-Base Multilingual Cased model released by the authors. The model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units per layer, and 12 attention heads, for a total of 110M parameters. The model has a shared WordPiece vocabulary of 119,547 entries and was pre-trained on the entire Wikipedia of each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$, train for 15 epochs, and choose the best model based on performance on a development set. We use the same hyper-parameters in all of our BERT models. We fine-tune BERT on the respective labeled dataset for each task. For BERT input, we apply WordPiece tokenization, setting the maximal sequence length to 50 words/WordPieces. For all tasks, we use a TensorFlow implementation. An exception is the sentiment analysis task, where we used a PyTorch implementation with the same hyper-parameters but with a learning rate of $2e-6$.", "Pre-processing. Most of our training data in all tasks come from Twitter. Exceptions are some of the datasets we use for sentiment analysis, which we point out in Section SECREF23. Our pre-processing thus incorporates methods to clean tweets; other datasets (e.g., from the news domain) are much less noisy. For pre-processing, we remove all usernames, URLs, and diacritics in the data." ], [ "Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19, which we will refer to as Arab-Tweet. Arab-Tweet is a tweet dataset of 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 tweets and were selected based on an initial list of seed words characteristic of each region. The seed list included words such as <برشة> /barsha/ ‘many’ for Tunisian Arabic and <وايد> /wayed/ ‘many’ for Gulf Arabic.
BIBREF19 employed human annotators to verify that users do belong to each respective region. Annotators also assigned gender labels from the set {male, female} and age group labels from the set {under-25, 25-to-34, above-35} at the user level, which in turn is assigned at the tweet level. Tweets with fewer than 3 words and re-tweets were removed. Refer to BIBREF19 for details about how annotation was carried out. We provide a description of the data in Table TABREF10. Table TABREF10 also provides the class breakdown across our splits. We note that BIBREF19 do not report classification models exploiting the data." ], [ "We shuffle the Arab-tweet dataset and split it into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is in Table TABREF10. For pre-processing, we reduce 2 or more consecutive repetitions of the same character into only 2 and remove diacritics. With this dataset, we train a small unidirectional GRU (small-GRU) with a single 500-unit hidden layer and $dropout=0.5$ as a baseline. Small-GRU is trained with the TRAIN set, a batch size of 8, and up to 30 words per sequence. Each word in the input sequence is represented as a trainable 300-dimensional vector. We use the top 100K words, weighted by mutual information, as our vocabulary in the embedding layer. We evaluate the model on the TEST set. Table TABREF14 shows that small-GRU obtains 36.29% acc on age classification and 53.37% acc on gender detection. We also report the accuracy of fine-tuned BERT models on the TEST set in Table TABREF14. BERT models perform significantly better than our baseline on the two tasks, improving over small-GRU by 15.13% (age) and 11.93% (gender) acc.", "UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled 1,989 users from the 21 Arab countries. The data had 1,246 “male\", 528 “female\", and 215 unknown users.
We remove the “unknown\" category and balance the dataset to have 528 from each of the two male\" and “female\" categories. We ended with 69,509 tweets for male\" and 67,511 tweets for “female\". We split the users into 80% TRAIN set (110,750 tweets for 845 users), 10% DEV set (14,158 tweets for 106 users), and 10% TEST set (12,112 tweets for 105 users). We, then, model this dataset with BERT-Base, Multilingual Cased model and evaluate on development and test sets. Table TABREF15 shows that fine-tuned model obtains 62.42% acc on DEV and 60.54% acc on TEST.", "We also combine the Arab-tweet gender dataset with our UBC-Twitter dataset for gender on training, development, and test, respectively, to obtain new TRAIN, DEV, and TEST. We fine-tune the BERT-Base, Multilingual Cased model with the combined TRAIN and evaluate on combined DEV and TEST. As Table TABREF15 shows, the model obtains 65.32% acc on combined DEV set, and 65.32% acc on combined TEST set. This is the model we package in AraNet ." ], [ "The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12. The corpus is divided into train, dev and test, and the organizers masked test set labels. We lost some tweets from training data when we crawled using tweet ids, ultimately acquiring 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). We also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2, to create TRAIN-B. Again, note that TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. We used tweets from 21 Arab countries as distributed by task organizers, except that we lost some tweets when we crawled using tweet ids. We had 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). For our experiments, we also make use of the task 1 corpus (95,000 sentences BIBREF21). 
More specifically, we concatenate the task 1 data to the training data of task 2 to create TRAIN-B. Note that both DEV and TEST across our experiments are exclusively the data released in task 2, as described above; TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. More information about the data is in BIBREF21. We use TRAIN-A to perform supervised modeling with BERT and TRAIN-B for self-training, under various conditions. We refer the reader to BIBREF12 for more information about our different experimental settings on dialect id. We acquire our best results with self-training, with a classification accuracy of 49.39% and an F1 score of 35.44. This is the winning system in the MADAR shared task, and we showed in BIBREF12 that our tweet-level predictions can be ported to user-level prediction. On user-level detection, our models perform superbly, with 77.40% acc and a 71.70% F1 score on unseen MADAR blind test data." ], [ "We make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels. The tweets are labeled with the Plutchik 8 primary emotions from the set: {anger, anticipation, disgust, fear, joy, sadness, surprise, trust}. The distant supervision approach depends on the use of seed phrases with the Arabic first person pronoun <انا> (Eng. “I\") + a seed word expressing an emotion, e.g., <فرحان> (Eng. “happy\"). The manually labeled part of the data comprises tweets carrying the seed phrases, verified by human annotators ($9,064$ tweets) for inclusion of the respective emotion. The rest of the dataset is labeled using only distant supervision (LAMA-DIST) ($182,605$ tweets). For more information about the dataset, readers are referred to BIBREF22. The data distribution over the emotion classes is in Table TABREF20.
We combine the LAMA+DINA and LAMA-DIST training sets and refer to this new training set as LAMA-D2 (189,903 tweets). We fine-tune BERT-Base, Multilingual Cased on LAMA-D2 and evaluate the model with the same DEV and TEST sets from LAMA+DINA. On the DEV set, the fine-tuned BERT model obtains 61.43% accuracy and a 58.83 $F_1$ score. On the TEST set, we acquire 62.38% acc and a 60.32% $F_1$ score." ], [ "We use the dataset for irony identification on Arabic tweets released by the IDAT@FIRE2019 shared task BIBREF24. The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony\"). Duplicates, retweets, and non-intelligible tweets are removed by the organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine.", "IDAT@FIRE2019 BIBREF24 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}. A total of 4,024 tweets were released by the organizers as training data. In addition, 1,006 tweets were used by the organizers as test data. Test labels were not released, and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training tweets into 90% TRAIN ($n$=3,621 tweets; 'ironic'=1,882 and 'non-ironic'=1,739) and 10% DEV ($n$=403 tweets; 'ironic'=209 and 'non-ironic'=194). We use the same small-GRU architecture of Section 3.1 as our baseline. We fine-tune the BERT-Base, Multilingual Cased model on our TRAIN and evaluate on DEV. The small-GRU obtains 73.70% accuracy and a 73.47% $F_1$ score. The BERT model significantly outperforms small-GRU, achieving 81.64% accuracy and an 81.62% $F_1$ score."
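Accuracy and $F_1$ are the metrics reported for all tasks above. As a minimal illustration (not the authors' evaluation code), the sketch below computes accuracy and macro-averaged $F_1$ from scratch; the gold/predicted label lists are hypothetical examples for a binary irony task.

```python
def accuracy(gold, pred):
    # Fraction of predictions that exactly match the gold labels.
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold, pred):
    # Macro-averaged F1: per-class F1 from true/false positives and
    # false negatives, averaged uniformly over the classes.
    labels = sorted(set(gold) | set(pred))
    f1s = []
    for c in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical gold and predicted labels (not from the paper's data).
gold = ["ironic", "ironic", "non-ironic", "non-ironic", "ironic"]
pred = ["ironic", "non-ironic", "non-ironic", "non-ironic", "ironic"]
print(round(accuracy(gold, pred), 2))  # 0.8
print(round(macro_f1(gold, pred), 2))  # 0.8
```

In practice a library routine (e.g., scikit-learn's `f1_score` with `average="macro"`) would be used; the point here is only to make the reported accuracy/$F_1$ numbers concrete.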
], [ "We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. These datasets involve different types of sentiment analysis tasks such as binary classification (i.e., negative or positive), 3-way classification (i.e., negative, neutral, or positive), and subjective language detection. To combine these datasets for binary sentiment classification, we normalize the different label types to binary labels in the set {'positive', 'negative'} by the following rules:", "Map {Positive, Pos, or High-Pos} to 'positive';", "Map {Negative, Neg, or High-Neg} to 'negative';", "Exclude samples whose label is not 'positive' or 'negative', such as 'obj', 'mixed', 'neut', or 'neutral'.", "After label normalization, we obtain 126,766 samples. We split this dataset into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is presented in Table TABREF27. We fine-tune pre-trained BERT on the TRAIN set using a PyTorch implementation with a $2e-6$ learning rate and 15 epochs, as explained in Section SECREF2. Our best model on the DEV set obtains 80.24% acc and 80.24% $F_1$. We evaluate this best model on the TEST set and obtain 77.31% acc and 76.67% $F_1$." ], [ "AraNet consists of identifier tools covering age, gender, dialect, emotion, irony, and sentiment. Each tool comes with an embedded model. The toolkit also comes with modules for normalization and tokenization. AraNet can be used as a Python library or a command-line tool:", "Python Library: Importing the AraNet module as a Python library provides the identifier functions. Prediction is based on a text or a path to a file and returns the identified class label. It also returns the probability distribution over all available class labels if needed.
Figure FIGREF34 shows two examples of using the tool as a Python library.", "Command-line tool: AraNet provides scripts supporting both command-line and interactive modes. Command-line mode accepts a text or a file path. Interactive mode is good for quick line-by-line experiments and also for pipeline redirection.", "AraNet is available through pip or from source on GitHub with detailed documentation." ], [ "As we pointed out earlier, there have been several works on some of the tasks but fewer on others. By far, Arabic sentiment analysis has been the most popular task. Several works have been performed for MSA BIBREF35, BIBREF0 and dialectal BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 sentiment analysis. A number of works have also been published for dialect detection, including BIBREF9, BIBREF10, BIBREF8, BIBREF11. Some works have been performed on the tasks of age detection BIBREF19, BIBREF36, gender detection BIBREF19, BIBREF36, irony identification BIBREF37, BIBREF24, and emotion analysis BIBREF38, BIBREF22.", "A number of tools exist for Arabic natural language processing, including the Penn Arabic Treebank BIBREF39, POS taggers BIBREF40, BIBREF41, the Buckwalter Morphological Analyzer BIBREF42, and Mazajak BIBREF7 for sentiment analysis." ], [ "We presented AraNet, a deep learning toolkit for a host of Arabic social media processing tasks. AraNet predicts age, dialect, gender, emotion, irony, and sentiment from social media posts. It delivers state-of-the-art and competitive performance on these tasks and has the advantage of using a unified, simple framework based on the recently-developed BERT model. AraNet has the potential to alleviate issues related to comparing across different Arabic social media NLP tasks, by providing one way to test new models against AraNet predictions. Our toolkit can be used to make important discoveries about the wide region of the Arab world, and can enhance our understanding of Arab online communication.
AraNet will be publicly available upon acceptance." ] ] } { "question": [ "Did they experiment on all the tasks?", "What models did they compare to?", "What datasets are used in training?" ], "question_id": [ "2419b38624201d678c530eba877c0c016cccd49f", "b99d100d17e2a121c3c8ff789971ce66d1d40a4d", "578d0b23cb983b445b1a256a34f969b34d332075" ], "nlp_background": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Implementation & Models Parameters. For all our tasks, we use the BERT-Base Multilingual Cased model released by the authors . The model is trained on 104 languages (including Arabic) with 12 layer, 768 hidden units each, 12 attention heads, and has 110M parameters in entire model. The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to$2e-5$and train for 15 epochs and choose the best model based on performance on a development set. We use the same hyper-parameters in all of our BERT models. We fine-tune BERT on each respective labeled dataset for each task. For BERT input, we apply WordPiece tokenization, setting the maximal sequence length to 50 words/WordPieces. For all tasks, we use a TensorFlow implementation. An exception is the sentiment analysis task, where we used a PyTorch implementation with the same hyper-parameters but with a learning rate$2e-6$.", "We presented AraNet, a deep learning toolkit for a host of Arabic social media processing. 
AraNet predicts age, dialect, gender, emotion, irony, and sentiment from social media posts. It delivers state-of-the-art and competitive performance on these tasks and has the advantage of using a unified, simple framework based on the recently-developed BERT model. AraNet has the potential to alleviate issues related to comparing across different Arabic social media NLP tasks, by providing one way to test new models against AraNet predictions. Our toolkit can be used to make important discoveries on the wide region of the Arab world, and can enhance our understating of Arab online communication. AraNet will be publicly available upon acceptance." ], "highlighted_evidence": [ "For all tasks, we use a TensorFlow implementation.", "AraNet predicts age, dialect, gender, emotion, irony, and sentiment from social media posts. It delivers state-of-the-art and competitive performance on these tasks and has the advantage of using a unified, simple framework based on the recently-developed BERT model. " ] } ], "annotation_id": [ "1e6f1f17079c5fd39dec880a5002fb9fc8d59412" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet, our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to. 
Hence, although we provide some baseline models for some of the tasks, we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models) . For many of the tasks we model, there have not been standard benchmarks for comparisons across models. This makes it difficult to measure progress and identify areas worthy of allocating efforts and budgets. As such, by publishing our toolkit models, we believe model-based comparisons will be one way to relieve this bottleneck. For these reasons, we also package models from our recent works on dialect BIBREF12 and irony BIBREF14 as part of AraNet ." ], "highlighted_evidence": [ "Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet, our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to. Hence, although we provide some baseline models for some of the tasks, we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models) . For many of the tasks we model, there have not been standard benchmarks for comparisons across models. This makes it difficult to measure progress and identify areas worthy of allocating efforts and budgets." 
] } ], "annotation_id": [ "9b44463b816b36f0753046936b967760414f856d" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Arap-Tweet BIBREF19 ", "an in-house Twitter dataset for gender", "the MADAR shared task 2 BIBREF20", "the LAMA-DINA dataset from BIBREF22", "LAMA-DIST", "Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24", "BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet. Arab-tweet is a tweet dataset of 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 and were selected based on an initial list of seed words characteristic of each region. The seed list included words such as <برشة> /barsha/ ‘many’ for Tunisian Arabic and <وايد> /wayed/ ‘many’ for Gulf Arabic. BIBREF19 employed human annotators to verify that users do belong to each respective region. Annotators also assigned gender labels from the set male, female and age group labels from the set under-25, 25-to34, above-35 at the user-level, which in turn is assigned at tweet level. Tweets with less than 3 words and re-tweets were removed. Refer to BIBREF19 for details about how annotation was carried out. We provide a description of the data in Table TABREF10. Table TABREF10 also provides class breakdown across our splits.We note that BIBREF19 do not report classification models exploiting the data.", "UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled 1,989 users from each of the 21 Arab countries. The data had 1,246 “male\", 528 “female\", and 215 unknown users. 
We remove the “unknown\" category and balance the dataset to have 528 from each of the two male\" and “female\" categories. We ended with 69,509 tweets for male\" and 67,511 tweets for “female\". We split the users into 80% TRAIN set (110,750 tweets for 845 users), 10% DEV set (14,158 tweets for 106 users), and 10% TEST set (12,112 tweets for 105 users). We, then, model this dataset with BERT-Base, Multilingual Cased model and evaluate on development and test sets. Table TABREF15 shows that fine-tuned model obtains 62.42% acc on DEV and 60.54% acc on TEST.", "The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12. The corpus is divided into train, dev and test, and the organizers masked test set labels. We lost some tweets from training data when we crawled using tweet ids, ultimately acquiring 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). We also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2, to create TRAIN-B. Again, note that TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. We used tweets from 21 Arab countries as distributed by task organizers, except that we lost some tweets when we crawled using tweet ids. We had 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). For our experiments, we also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2, to create TRAIN-B. Note that both DEV and TEST across our experiments are exclusively the data released in task 2, as described above. TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. More information about the data is in BIBREF21. 
We use TRAIN-A to perform supervised modeling with BERT and TRAIN-B for self training, under various conditions. We refer the reader to BIBREF12 for more information about our different experimental settings on dialect id. We acquire our best results with self-training, with a classification accuracy of 49.39% and F1 score at 35.44. This is the winning system model in the MADAR shared task and we showed in BIBREF12 that our tweet-level predictions can be ported to user-level prediction. On user-level detection, our models perform superbly, with 77.40% acc and 71.70% F1 score on unseen MADAR blind test data.", "We make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels. The tweets are labeled with the Plutchik 8 primary emotions from the set: {anger, anticipation, disgust, fear, joy, sadness, surprise, trust}. The distant supervision approach depends on use of seed phrases with the Arabic first person pronoun انا> (Eng. “I\") + a seed word expressing an emotion, e.g., فرحان> (Eng. “happy\"). The manually labeled part of the data comprises tweets carrying the seed phrases verified by human annotators$9,064$tweets for inclusion of the respective emotion. The rest of the dataset is only labeled using distant supervision (LAMA-DIST) ($182,605$tweets) . For more information about the dataset, readers are referred to BIBREF22. The data distribution over the emotion classes is in Table TABREF20. We combine LAMA+DINA and LAMA-DIST training set and refer to this new training set as LAMA-D2 (189,903 tweets). We fine-tune BERT-Based, Multilingual Cased on the LAMA-D2 and evaluate the model with same DEV and TEST sets from LAMA+DINA. On DEV set, the fine-tuned BERT model obtains 61.43% on accuracy and 58.83 on$F_1$score. 
On TEST set, we acquire 62.38% acc and 60.32%$F_1$score.", "We use the dataset for irony identification on Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24. The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony\"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine.", "We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. These datasets involve different types of sentiment analysis tasks such as binary classification (i.e., negative or positive), 3-way classification (i.e., negative, neutral, or positive), and subjective language detection. To combine these datasets for binary sentiment classification, we normalize different types of label to binary labels in the set$\\lbrace positive^{\\prime }, negative^{\\prime }\\rbrace $by following rules:" ], "highlighted_evidence": [ "Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet.", "UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled 1,989 users from each of the 21 Arab countries.", "The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12. 
The corpus is divided into train, dev and test, and the organizers masked test set labels.", "We make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels.", "The rest of the dataset is only labeled using distant supervision (LAMA-DIST) ($182,605$tweets) . For more information about the dataset, readers are referred to BIBREF22. ", "We use the dataset for irony identification on Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24.", "We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. " ] }, { "unanswerable": false, "extractive_spans": [ " Arap-Tweet ", "UBC Twitter Gender Dataset", "MADAR ", "LAMA-DINA ", "IDAT@FIRE2019", "15 datasets related to sentiment analysis of Arabic, including MSA and dialects" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet. Arab-tweet is a tweet dataset of 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 and were selected based on an initial list of seed words characteristic of each region. The seed list included words such as <برشة> /barsha/ ‘many’ for Tunisian Arabic and <وايد> /wayed/ ‘many’ for Gulf Arabic. BIBREF19 employed human annotators to verify that users do belong to each respective region. Annotators also assigned gender labels from the set male, female and age group labels from the set under-25, 25-to34, above-35 at the user-level, which in turn is assigned at tweet level. Tweets with less than 3 words and re-tweets were removed. Refer to BIBREF19 for details about how annotation was carried out. 
We provide a description of the data in Table TABREF10. Table TABREF10 also provides class breakdown across our splits.We note that BIBREF19 do not report classification models exploiting the data.", "UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled 1,989 users from each of the 21 Arab countries. The data had 1,246 “male\", 528 “female\", and 215 unknown users. We remove the “unknown\" category and balance the dataset to have 528 from each of the two male\" and “female\" categories. We ended with 69,509 tweets for male\" and 67,511 tweets for “female\". We split the users into 80% TRAIN set (110,750 tweets for 845 users), 10% DEV set (14,158 tweets for 106 users), and 10% TEST set (12,112 tweets for 105 users). We, then, model this dataset with BERT-Base, Multilingual Cased model and evaluate on development and test sets. Table TABREF15 shows that fine-tuned model obtains 62.42% acc on DEV and 60.54% acc on TEST.", "The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12. The corpus is divided into train, dev and test, and the organizers masked test set labels. We lost some tweets from training data when we crawled using tweet ids, ultimately acquiring 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). We also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2, to create TRAIN-B. Again, note that TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. We used tweets from 21 Arab countries as distributed by task organizers, except that we lost some tweets when we crawled using tweet ids. We had 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). For our experiments, we also make use of the task 1 corpus (95,000 sentences BIBREF21). 
More specifically, we concatenate the task 1 data to the training data of task 2, to create TRAIN-B. Note that both DEV and TEST across our experiments are exclusively the data released in task 2, as described above. TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. More information about the data is in BIBREF21. We use TRAIN-A to perform supervised modeling with BERT and TRAIN-B for self training, under various conditions. We refer the reader to BIBREF12 for more information about our different experimental settings on dialect id. We acquire our best results with self-training, with a classification accuracy of 49.39% and F1 score at 35.44. This is the winning system model in the MADAR shared task and we showed in BIBREF12 that our tweet-level predictions can be ported to user-level prediction. On user-level detection, our models perform superbly, with 77.40% acc and 71.70% F1 score on unseen MADAR blind test data.", "Data and Models ::: Emotion", "We make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels. The tweets are labeled with the Plutchik 8 primary emotions from the set: {anger, anticipation, disgust, fear, joy, sadness, surprise, trust}. The distant supervision approach depends on use of seed phrases with the Arabic first person pronoun انا> (Eng. “I\") + a seed word expressing an emotion, e.g., فرحان> (Eng. “happy\"). The manually labeled part of the data comprises tweets carrying the seed phrases verified by human annotators$9,064$tweets for inclusion of the respective emotion. The rest of the dataset is only labeled using distant supervision (LAMA-DIST) ($182,605$tweets) . For more information about the dataset, readers are referred to BIBREF22. The data distribution over the emotion classes is in Table TABREF20. 
We combine the LAMA+DINA and LAMA-DIST training sets and refer to this new training set as LAMA-D2 (189,903 tweets). We fine-tune BERT-Base, Multilingual Cased on LAMA-D2 and evaluate the model with the same DEV and TEST sets from LAMA+DINA. On the DEV set, the fine-tuned BERT model obtains 61.43% accuracy and a 58.83 $F_1$ score. On the TEST set, we acquire 62.38% acc and a 60.32% $F_1$ score.", "We use the dataset for irony identification on Arabic tweets released by the IDAT@FIRE2019 shared task BIBREF24. The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events), and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony\"). Duplicates, retweets, and non-intelligible tweets are removed by the organizers. Tweets involve both MSA and dialects at various degrees of granularity, such as Egyptian, Gulf, and Levantine.", "We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. These datasets involve different types of sentiment analysis tasks such as binary classification (i.e., negative or positive), 3-way classification (i.e., negative, neutral, or positive), and subjective language detection. To combine these datasets for binary sentiment classification, we normalize the different types of labels to binary labels in the set $\lbrace positive, negative \rbrace$ by the following rules:" ], "highlighted_evidence": [ "For modeling age and gender, we use Arap-Tweet BIBREF19, which we will refer to as Arab-Tweet.", "UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. 
", "The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12. ", "Emotion\nWe make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels. ", "We use the dataset for irony identification on Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24.", "We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34." ] } ], "annotation_id": [ "5d7ee4a4ac4dfc9729570afa0aa189959fe9de25", "bd0a4cc46a4c8bef5c8b1c2f8d7e8b72d4f81308" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] } 1712.09127 Generative Adversarial Nets for Multiple Text Corpora Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpora. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems. 
{ "section_name": [ "Introduction", "Literature Review", "Models and Algorithms", "weGAN: Training cross-corpus word embeddings", "deGAN: Generating document embeddings for multi-corpus text data", "Experiments", "The CNN data set", "The TIME data set", "The 20 Newsgroups data set", "The Reuters-21578 data set", "Conclusion", "Reference" ], "paragraphs": [ [ "Generative adversarial nets (GAN) (Goodfellow et al., 2014) belong to a class of generative models which are trainable and can generate artificial data examples similar to the existing ones. In a GAN model, there are two sub-models simultaneously trained: a generative model INLINEFORM0 from which artificial data examples can be sampled, and a discriminative model INLINEFORM1 which classifies real data examples and artificial ones from INLINEFORM2 . By training INLINEFORM3 to maximize its generation power, and training INLINEFORM4 to minimize the generation power of INLINEFORM5 , so that ideally there will be no difference between the true and artificial examples, a minimax problem can be established. The GAN model has been shown to closely replicate a number of image data sets, such as MNIST, Toronto Face Database (TFD), CIFAR-10, SVHN, and ImageNet (Goodfellow et al., 2014; Salimans et al. 2016).", "The GAN model has been extended to text data in a number of ways. For instance, Zhang et al. (2016) applied a long-short term memory (Hochreiter and Schmidhuber, 1997) generator and approximated discretization to generate text data. Moreover, Li et al. (2017) applied the GAN model to generate dialogues, i.e. pairs of questions and answers. Meanwhile, the GAN model can also be applied to generate bag-of-words embeddings of text data, which focus more on key terms in a text document rather than the original document itself. 
Glover (2016) provided such a model with the energy-based GAN (Zhao et al., 2017).", "To the best of our knowledge, there has been no literature on applying the GAN model to multiple corpora of text data. Multi-class GANs (Liu and Tuzel, 2016; Mirza and Osindero, 2014) have been proposed, but a class in multi-class classification is not the same as multiple corpora. Knowing the underlying corpus membership of each text document provides information on how the text documents are organized, and documents from the same corpus are expected to share similar topics or key words; considering this membership information can therefore benefit the training of a text model from a supervised perspective. We consider two problems associated with training multi-corpus text data: (1) Given a separate set of word embeddings from each corpus, such as the word2vec embeddings (Mikolov et al., 2013), how to obtain a better set of cross-corpus word embeddings from them? (2) How to incorporate the generation of document embeddings from different corpora in a single GAN model?", "For the first problem, we train a GAN model which discriminates documents represented by different word embeddings, and train the cross-corpus word embedding so that it is similar to each existing word embedding per corpus. For the second problem, we train a GAN model which considers both cross-corpus and per-corpus “topics” in the generator, and applies a discriminator which considers each original and artificial document corpus. We also show that with sufficient training, the distribution of the artificial document embeddings is equivalent to the original ones. Our work has the following contributions: (1) we extend GANs to multiple corpora of text data, (2) we provide applications of GANs to finetune word embeddings and to create robust document embeddings, and (3) we establish theoretical convergence results of the multi-class GAN model.", "Section 2 reviews existing GAN models related to this paper. 
Section 3 describes the GAN models on training cross-corpus word embeddings and generating document embeddings for each corpus, and explains the associated algorithms. Section 4 presents the results of the two models on text data sets, and applies them to supervised learning. Section 5 summarizes the results and concludes the paper." ], [ "In a GAN model, we assume that the data examples INLINEFORM0 are drawn from a distribution INLINEFORM1 , and the artificial data examples INLINEFORM2 are transformed from the noise distribution INLINEFORM3 . The binary classifier INLINEFORM4 outputs the probability of a data example (or an artificial one) being an original one. We consider the following minimax problem DISPLAYFORM0 ", "With sufficient training, it is shown in Goodfellow et al. (2014) that the distribution of artificial data examples INLINEFORM0 is eventually equivalent to the data distribution INLINEFORM1 , i.e. INLINEFORM2 .", "Because the probabilistic structure of a GAN can be unstable to train, the Wasserstein GAN (Arjovsky et al., 2017) was proposed, which applies a 1-Lipschitz function as a discriminator. In a Wasserstein GAN, we consider the following minimax problem DISPLAYFORM0 ", "These GANs are for the general purpose of learning the data distribution in an unsupervised way and creating perturbed data examples resembling the original ones. We note that in many circumstances, data sets are obtained with supervised labels or categories, which can add explanatory power to unsupervised models such as the GAN. We summarize such GANs because a corpus can be potentially treated as a class. 
The main difference is that classes are purely for the task of classification while we are interested in embeddings that can be used for any supervised or unsupervised task.", "For instance, the CoGAN (Liu and Tuzel, 2016) considers pairs of data examples from different categories as follows INLINEFORM0 ", " where the weights of the first few layers of INLINEFORM0 and INLINEFORM1 (i.e. close to INLINEFORM2 ) are tied. Mirza and Osindero (2014) proposed the conditional GAN where the generator INLINEFORM3 and the discriminator INLINEFORM4 depend on the class label INLINEFORM5 . While these GANs generate samples resembling different classes, other variations of GANs apply the class labels for semi-supervised learning. For instance, Salimans et al. (2016) proposed the following objective DISPLAYFORM0 ", "where INLINEFORM0 has INLINEFORM1 classes plus the INLINEFORM2 -th artificial class. Similar models can be found in Odena (2016), the CatGAN in Springenberg (2016), and the LSGAN in Mao et al. (2017). However, all these models consider only images and do not produce word or document embeddings, therefore being different from our models.", "For generating real text, Zhang et al. (2016) proposed textGAN in which the generator has the following form, DISPLAYFORM0 ", "where INLINEFORM0 is the noise vector, INLINEFORM1 is the generated sentence, INLINEFORM2 are the words, and INLINEFORM3 . A uni-dimensional convolutional neural network (Collobert et al, 2011; Kim, 2014) is applied as the discriminator. Also, a weighted softmax function is applied to make the argmax function differentiable. With textGAN, sentences such as “we show the efficacy of our new solvers, making it up to identify the optimal random vector...” can be generated. Similar models can also be found in Wang et al. (2016), Press et al. (2017), and Rajeswar et al. (2017). 
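For concreteness, the standard GAN value function reviewed above can be evaluated empirically as follows (a minimal numpy sketch for illustration; not code from any of the cited works):

```python
import numpy as np


def gan_value(d_real, d_fake):
    """Empirical GAN objective: mean log D(x) over real examples plus
    mean log(1 - D(G(z))) over artificial ones. The discriminator D
    maximizes this value; the generator G minimizes it."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.log(d_real).mean() + np.log(1.0 - d_fake).mean()
```

At the ideal solution the discriminator outputs 1/2 everywhere, giving the value 2 log(1/2); a discriminator that separates real from artificial examples better attains a higher value.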
The focus of our work is to summarize information from longer documents, so we apply document embeddings such as the tf-idf to represent the documents rather than to generate real text.", "For generating bag-of-words embeddings of text, Glover (2016) proposed the following model DISPLAYFORM0 ", "and INLINEFORM0 is the mean squared error of a de-noising autoencoder, and INLINEFORM1 is the one-hot word embedding of a document. Our models are different from this model because we consider tf-idf document embeddings for multiple text corpora in the deGAN model (Section 3.2), and weGAN (Section 3.1) can be applied to produce word embeddings. Also, we focus on robustness based on several corpora, while Glover (2016) assumed a single corpus.", "For extracting word embeddings given text data, Mikolov et al. (2013) proposed the word2vec model, for which there are two variations: the continuous bag-of-words (cBoW) model (Mikolov et al., 2013b), where the neighboring words are used to predict the appearance of each word; the skip-gram model, where each neighboring word is used individually for prediction. In GloVe (Pennington et al., 2014), a bilinear regression model is trained on the log of the word co-occurrence matrix. In these models, the weights associated with each word are used as the embedding. For obtaining document embeddings, the para2vec model (Le and Mikolov, 2014) adds per-paragraph vectors to train word2vec-type models, so that the vectors can be used as embeddings for each paragraph. A simpler approach, which takes the average of the embeddings of the words in a document and outputs it as the document embedding, is exhibited in Socher et al. (2013)." ], [ "Suppose we have a number of different corpora INLINEFORM0 , which for example can be based on different categories or sentiments of text documents. We suppose that INLINEFORM1 , INLINEFORM2 , where each INLINEFORM3 represents a document. 
The words in all corpora are collected in a dictionary, and indexed from 1 to INLINEFORM4 . We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”" ], [ "We assume that for each corpus INLINEFORM0 , we are given word embeddings for each word INLINEFORM1 , where INLINEFORM2 is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model INLINEFORM3 taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . Note that INLINEFORM7 are given but INLINEFORM8 is trained. Here we consider INLINEFORM9 as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings INLINEFORM10 and the same documents represented by the new embeddings INLINEFORM11 .", "Next we describe how the documents are represented by a set of embeddings INLINEFORM0 and INLINEFORM1 . For each document INLINEFORM2 , we define its document embedding with INLINEFORM3 as follows, DISPLAYFORM0 ", "where INLINEFORM0 can be any mapping. Similarly, we define the document embedding of INLINEFORM1 with INLINEFORM2 as follows, with INLINEFORM3 trainable DISPLAYFORM0 ", "In a typical example, word embeddings would be based on word2vec or GloVe. Function INLINEFORM0 can be based on tf-idf, i.e. 
INLINEFORM1 where INLINEFORM2 is the word embedding of the INLINEFORM3 -th word in the INLINEFORM4 -th corpus INLINEFORM5 and INLINEFORM6 is the tf-idf representation of the INLINEFORM7 -th document INLINEFORM8 in the INLINEFORM9 -th corpus INLINEFORM10 .", "To train the GAN model, we consider the following minimax problem DISPLAYFORM0 ", "where INLINEFORM0 is a discriminator of whether a document is original or artificial. Here INLINEFORM1 is the label of document INLINEFORM2 with respect to classifier INLINEFORM3 , and INLINEFORM4 is a unit vector with only the INLINEFORM5 -th component being one and all other components being zeros. Note that INLINEFORM6 is equivalent to INLINEFORM7 , but we use the former notation due to its brevity.", "The intuition of problem (8) is explained as follows. First we consider a discriminator INLINEFORM0 which is a feedforward neural network (FFNN) with binary outcomes, and classifies the document embeddings INLINEFORM1 against the original document embeddings INLINEFORM2 . Discriminator INLINEFORM3 minimizes this classification error, i.e. it maximizes the log-likelihood of INLINEFORM4 having label 0 and INLINEFORM5 having label 1. This corresponds to DISPLAYFORM0 ", "For the generator INLINEFORM0 , we wish to minimize (8) against INLINEFORM1 so that we can apply the minimax strategy, and the combined word embeddings INLINEFORM2 would resemble each set of word embeddings INLINEFORM3 . 
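As a concrete illustration of the tf-idf-based document embedding defined above, the weighted combination of word embeddings can be sketched as follows (numpy, illustrative only; the paper's exact mapping is abstracted by the INLINEFORM placeholders in this preview):

```python
import numpy as np


def doc_embedding(tfidf_vec, word_embs):
    """Document embedding as a tf-idf-weighted sum of word embeddings:
    tfidf_vec has shape (V,) over the dictionary, word_embs has shape
    (V, d), and the result is a (d,)-dimensional document vector."""
    return tfidf_vec @ word_embs
```

In the experiments described later, an additional activation is applied to this weighted sum; its exact form is likewise abstracted in this preview.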
Meanwhile, we also consider classifier INLINEFORM4 with INLINEFORM5 outcomes, and associates INLINEFORM6 with label INLINEFORM7 , so that the generator INLINEFORM8 can learn from the document labeling in a semi-supervised way.", "If the classifier INLINEFORM0 outputs a INLINEFORM1 -dimensional softmax probability vector, we minimize the following against INLINEFORM2 , which corresponds to (8) given INLINEFORM3 and INLINEFORM4 : DISPLAYFORM0 ", "For the classifier INLINEFORM0 , we also minimize its negative log-likelihood DISPLAYFORM0 ", "Assembling (9-11) together, we retrieve the original minimax problem (8).", "We train the discriminator and the classifier, INLINEFORM0 , and the combined embeddings INLINEFORM1 according to (9-11) iteratively for a fixed number of epochs with the stochastic gradient descent algorithm, until the discrimination and classification errors become stable.", "The algorithm for weGAN is summarized in Algorithm 1, and Figure 1 illustrates the weGAN model.", "", " Algorithm 1. Train INLINEFORM0 based on INLINEFORM1 from all corpora INLINEFORM2 . Randomly initialize the weights and biases of the classifier INLINEFORM3 and discriminator INLINEFORM4 . Until maximum number of iterations reached Update INLINEFORM5 and INLINEFORM6 according to (9) and (11) given a mini-batch INLINEFORM7 of training examples INLINEFORM8 . Update INLINEFORM9 according to (10) given a mini-batch INLINEFORM10 of training examples INLINEFORM11 . Output INLINEFORM12 as the cross-corpus word embeddings. ", "", "", "", "", "", "", "" ], [ "In this section, our goal is to generate document embeddings which would resemble real document embeddings in each corpus INLINEFORM0 , INLINEFORM1 . We construct INLINEFORM2 generators, INLINEFORM3 so that INLINEFORM4 generate artificial examples in corpus INLINEFORM5 . As in Section 3.1, there is a certain document embedding such as tf-idf, bag-of-words, or para2vec. Let INLINEFORM6 . 
We initialize a noise vector INLINEFORM7 , where INLINEFORM8 , and INLINEFORM9 is any noise distribution.", "For a generator INLINEFORM0 represented by its parameters, we first map the noise vector INLINEFORM1 to the hidden layer, which represents different topics. We consider two hidden vectors, INLINEFORM2 for general topics and INLINEFORM3 for specific topics per corpus, DISPLAYFORM0 ", "Here INLINEFORM0 represents a nonlinear activation function. In this model, the bias term can be ignored in order to prevent the “mode collapse” problem of the generator. Having the hidden vectors, we then map them to the generated document embedding with another activation function INLINEFORM1 , DISPLAYFORM0 ", "To summarize, we may represent the process from noise to the document embedding as follows, DISPLAYFORM0 ", "Given the generated document embeddings INLINEFORM0 , we consider the following minimax problem to train the generator INLINEFORM1 and the discriminator INLINEFORM2 : INLINEFORM3 INLINEFORM4 ", "Here we assume that any document embedding INLINEFORM0 in corpus INLINEFORM1 is a sample with respect to the probability density INLINEFORM2 . Note that when INLINEFORM3 , the discriminator part of our model is equivalent to the original GAN model.", "To explain (15), first we consider the discriminator INLINEFORM0 . Because there are multiple corpora of text documents, here we consider INLINEFORM1 categories as output of INLINEFORM2 , from which categories INLINEFORM3 represent the original corpora INLINEFORM4 , and categories INLINEFORM5 represent the generated document embeddings (e.g. bag-of-words) from INLINEFORM6 . Assume the discriminator INLINEFORM7 , a feedforward neural network, outputs the distribution of a text document being in each category. 
We maximize the log-likelihood of each document being in the correct category against INLINEFORM8 DISPLAYFORM0 ", "Such a classifier not only classifies text documents into different categories, but also considers INLINEFORM0 “fake” categories from the generators. When training the generators INLINEFORM1 , we minimize the following, which makes a comparison between the INLINEFORM2 -th and INLINEFORM3 -th categories DISPLAYFORM0 ", "The intuition of (17) is that for each generated document embedding INLINEFORM0 , we need to decrease INLINEFORM1 , which is the probability of the generated embedding being correctly classified, and increase INLINEFORM2 , which is the probability of the generated embedding being classified into the target corpus INLINEFORM3 . The ratio in (17) reflects these two properties.", "We iteratively train (16) and (17) until the classification error of INLINEFORM0 becomes stable. The algorithm for deGAN is summarized in Algorithm 2, and Figure 2 illustrates the deGAN model.", "", " Algorithm 2. Randomly initialize the weights of INLINEFORM0 . Initialize the discriminator INLINEFORM1 with the weights of the first layer (which takes document embeddings as the input) initialized by word embeddings, and other parameters randomly initialized. Until maximum number of iterations reached Update INLINEFORM2 according to (16) given a mini-batch of training examples INLINEFORM3 and samples from noise INLINEFORM4 . Update INLINEFORM5 according to (17) given a mini-batch of training examples INLINEFORM6 and samples from noise INLINEFORM7 . Output INLINEFORM8 as generators of document embeddings and INLINEFORM9 as a corpus classifier. ", "", "", "", "", "", "", "", "We next show that from (15), the distributions of the document embeddings from the optimal INLINEFORM0 are equal to the data distributions of INLINEFORM1 , which is a generalization of Goodfellow et al. (2014) to the multi-corpus scenario.", "Proposition 1. 
Let us assume that the random variables INLINEFORM0 are continuous with probability density INLINEFORM1 which have bounded support INLINEFORM2 ; INLINEFORM3 is a continuous random variable with bounded support and activations INLINEFORM4 and INLINEFORM5 are continuous; and that INLINEFORM6 are solutions to (15). Then INLINEFORM7 , the probability density of the document embeddings from INLINEFORM8 , INLINEFORM9 , are equal to INLINEFORM10 .", "Proof. Since INLINEFORM0 is bounded, all of the integrals exhibited next are well-defined and finite. Since INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are continuous, it follows that for any parameters, INLINEFORM4 is a continuous random variable with probability density INLINEFORM5 with finite support.", "From the first line of (15), INLINEFORM0 ", " This problem reduces to INLINEFORM0 subject to INLINEFORM1 , the solution of which is INLINEFORM2 , INLINEFORM3 . Therefore, the solution to (18) is DISPLAYFORM0 ", "We then obtain from the second line of (15) that INLINEFORM0 ", " From non-negativity of the Kullback-Leibler divergence, we conclude that INLINEFORM0 " ], [ "In the experiments, we consider four data sets, two of them newly created and the remaining two already public: CNN, TIME, 20 Newsgroups, and Reuters-21578. The code and the two new data sets are available at github.com/baiyangwang/emgan. For the pre-processing of all the documents, we transformed all characters to lower case, stemmed the documents, and ran the word2vec model on each corpus to obtain word embeddings with a size of 300. In all subsequent models, we only consider the most frequent INLINEFORM0 words across all corpora in a data set.", "The document embedding in weGAN is the tf-idf weighted word embedding transformed by the INLINEFORM0 activation, i.e. 
DISPLAYFORM0 ", "For deGAN, we use INLINEFORM0 -normalized tf-idf as the document embedding because it is easier to interpret than the transformed embedding in (20).", "For weGAN, the cross-corpus word embeddings are initialized with the word2vec model trained from all documents. For training our models, we apply a learning rate which increases linearly from INLINEFORM0 to INLINEFORM1 and train the models for 100 epochs with a batch size of 50 per corpus. The classifier INLINEFORM2 has a single hidden layer with 50 hidden nodes, and the discriminator with a single hidden layer INLINEFORM3 has 10 hidden nodes. All these parameters have been optimized. For the labels INLINEFORM4 in (8), we apply corpus membership of each document.", "For the noise distribution INLINEFORM0 for deGAN, we apply the uniform distribution INLINEFORM1 . In (14) for deGAN, INLINEFORM2 and INLINEFORM3 so that the model outputs document embedding vectors which are comparable to INLINEFORM4 -normalized tf-idf vectors for each document. For the discriminator INLINEFORM5 of deGAN, we apply the word2vec embeddings based on all corpora to initialize its first layer, followed by another hidden layer of 50 nodes. For the discriminator INLINEFORM6 , we apply a learning rate of INLINEFORM7 , and for the generator INLINEFORM8 , we apply a learning rate of INLINEFORM9 , because the initial training phase of deGAN can be unstable. We also apply a batch size of 50 per corpus. 
For the softmax layers of deGAN, we initialize them with the log of the topic-word matrix in latent Dirichlet allocation (LDA) (Blei et al., 2003) in order to provide intuitive estimates.", "For weGAN, we consider two metrics for comparing the embeddings trained from weGAN and those trained from all documents: (1) applying the document embeddings to cluster the documents into INLINEFORM0 clusters with the K-means algorithm, and calculating the Rand index (RI) (Rand, 1971) against the original corpus membership; (2) finetuning the classifier INLINEFORM1 and comparing the classification error against an FFNN of the same structure initialized with word2vec (w2v). For deGAN, we compare the performance of finetuning the discriminator of deGAN for document classification, and the performance of the same FFNN. Each supervised model is trained for 500 epochs and the validation data set is used to choose the best epoch." ], [ "In the CNN data set, we collected all news links on www.cnn.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the three largest categories: “politics,” “world,” and “US.” We then divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents.", "We hypothesize that because weGAN takes into account document labels in a semi-supervised way, the embeddings trained from weGAN can better incorporate the labeling information and therefore produce document embeddings which are better separated. The results are shown in Table 1 and averaged over 5 randomized runs. Performing Welch's t-test, both changes after weGAN training are statistically significant at a INLINEFORM0 significance level. 
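The Rand index used in metric (1) can be computed directly by pair counting (a simple sketch, not the authors' implementation):

```python
from itertools import combinations


def rand_index(labels_a, labels_b):
    """Rand index between two labelings of the same items: the fraction
    of item pairs on which the labelings agree, i.e. the pair is placed
    in the same cluster by both, or in different clusters by both."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agreements = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agreements / len(pairs)
```

Because only pairwise co-membership is compared, the index is invariant to renaming the clusters, which makes it suitable for scoring K-means output against corpus membership.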
Because the Rand index captures matching accuracy, we observe from Table 1 that weGAN tends to improve both metrics.", " Meanwhile, we also wish to observe the spatial structure of the trained embeddings, which can be explored through the synonyms of each word, measured by cosine similarity. On average, the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training, and INLINEFORM1 of all words have different top 10 synonyms after training. Therefore, weGAN tends to provide small adjustments rather than structural changes. Table 2 lists the 10 most similar terms for three terms, “Obama,” “Trump,” and “U.S.,” before and after weGAN training, ordered by cosine similarity.", "", "We observe from Table 2 that for “Obama,” “Trump” and “Tillerson” are more similar after weGAN training, which means that the structure of the weGAN embeddings can be more up-to-date. For “Trump,” we observe that “Clinton” is not among the synonyms before, but is after, which shows that the synonyms after are more relevant. For “U.S.,” we observe that after training, “American” replaces “British” in the list of synonyms, which is also more relevant.", "We next discuss deGAN. In Table 3, we compare the performance of finetuning the discriminator of deGAN for document classification, and the performance of the FFNN initialized with word2vec. The change is also statistically significant at the INLINEFORM0 level. From Table 3, we observe that deGAN improves the accuracy of supervised learning.", "", "To compare the generated samples from deGAN with the original bag-of-words, we randomly select one record in each original and artificial corpus. The records are represented by the most frequent words sorted by frequency in descending order, with stop words removed. 
The bag-of-words embeddings are shown in Table 4.", "", "From Table 4, we observe that the bag-of-words embeddings of the original documents tend to contain more named entities, while those of the artificial deGAN documents tend to be more general. Many additional examples not shown here have artificial bag-of-words embeddings containing named entities such as “Turkey,” “ISIS,” etc., e.g. “Syria eventually ISIS U.S. details jet aircraft October video extremist...”", "We also perform dimensionality reduction using t-SNE (van der Maaten and Hinton, 2008), and plot 100 random samples from each original or artificial category. The original samples are shown in red and the generated ones are shown in blue in Figure 3. We do not further distinguish the categories because there is no clear distinction between the three original corpora, “politics,” “world,” and “US.” The results are shown in Figure 3.", "We observe that the original and artificial examples are generally mixed together and not well separable, which means that the artificial examples are similar to the original ones. However, we also observe that the artificial samples tend to be more centered and have no outliers (represented by the outermost red oval)." ], [ "In the TIME data set, we collected all news links on time.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the five largest categories: “Entertainment,” “Ideas,” “Politics,” “US,” and “World.” We divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents.", "Table 5 compares the clustering results of word2vec and weGAN, and the classification accuracy of an FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. The results in Table 5 are the counterparts of Table 1 and Table 3 for the TIME data set. 
The differences are also significant at the INLINEFORM0 level.", "", "From Table 5, we observe that both GAN models yield improved supervised learning performance. For weGAN, on average, the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training, and INLINEFORM1 of all words have different top 10 synonyms after training. We also compare the synonyms of the same common words, “Obama,” “Trump,” and “U.S.,” which are listed in Table 6.", "", "In the TIME data set, for “Obama,” “Reagan” is ranked slightly higher as an American president. For “Trump,” “Bush” and “Sanders” are ranked higher as American presidents or candidates. For “U.S.,” we note that “Pentagon” is ranked higher after weGAN training, which we think is also reasonable because the term is closely related to the U.S. government.", "For deGAN, we also compare the original and artificial samples in terms of the highest probability words. Table 7 shows one record for each category.", "", "From Table 7, we observe that the produced bag-of-words are generally alike, and the words in the same sample are related to each other to some extent.", "We also perform dimensionality reduction using t-SNE for 100 examples per corpus and plot them in Figure 4. We observe that the original and generated points are mixed together, but deGAN cannot reproduce the outliers." ], [ "The 20 Newsgroups data set is a collection of news documents with 20 categories. To reduce the number of categories so that the GAN models are more compact and have more samples per corpus, we grouped the documents into 6 super-categories: “religion,” “computer,” “cars,” “sport,” “science,” and “politics” (“misc” is ignored because of its noisiness). We considered each super-category as a different corpora. We then divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents. 
We train weGAN and deGAN in the same way as at the beginning of Section 4, except that we use a learning rate of INLINEFORM3 for the discriminator in deGAN to stabilize the cost function. Table 8 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the INLINEFORM4 level. The other results are similar to those on the previous two data sets and are therefore omitted here.", "" ], [ "The Reuters-21578 data set is a collection of newswire articles. Because the data set is highly skewed, we considered the eight categories with more than 100 training documents: “earn,” “acq,” “crude,” “trade,” “money-fx,” “interest,” “money-supply,” and “ship.” We then divided these documents into INLINEFORM0 training documents, from which 692 validation documents are held out, and INLINEFORM1 testing documents. We train weGAN and deGAN in the same way as in the 20 Newsgroups data set. Table 9 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the INLINEFORM2 level except the Rand index. The other results are similar to those on the CNN and TIME data sets and are therefore omitted here.", "" ], [ "In this paper, we have demonstrated the application of the GAN model to text data with multiple corpora. We have shown that GAN models are not only able to generate images, but are also able to refine word embeddings and generate document embeddings. Such models can better learn the inner structure of multi-corpus text data, and also benefit supervised learning. The improvements in supervised learning are not large but statistically significant. The weGAN model outperforms deGAN in terms of supervised learning for 3 out of 4 data sets, and is therefore recommended.
The synonyms from weGAN also tend to be more relevant than those from the original word2vec model. The t-SNE plots show that our generated document embeddings are distributed similarly to the original ones." ], [ "M. Arjovsky, S. Chintala, and L. Bottou. (2017). Wasserstein GAN. arXiv:1701.07875.", "D. Blei, A. Ng, and M. Jordan. (2003). Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993-1022.", "R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. (2011). Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493-2537.", "I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. (2014). Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27 (NIPS 2014).", "J. Glover. (2016). Modeling Documents with Generative Adversarial Networks. In Workshop on Adversarial Training (NIPS 2016).", "S. Hochreiter and J. Schmidhuber. (1997). Long Short-Term Memory. Neural Computation, 9:1735-1780.", "Y. Kim. (2014). Convolutional Neural Networks for Sentence Classification. In The 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).", "Q. Le and T. Mikolov. (2014). Distributed Representations of Sentences and Documents. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014).", "J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. (2017). Adversarial Learning for Neural Dialogue Generation. arXiv:1701.06547.", "M.-Y. Liu and O. Tuzel. (2016). Coupled Generative Adversarial Networks. In Advances in Neural Information Processing Systems 29 (NIPS 2016).", "X. Mao, Q. Li, H. Xie, R. Lau, Z. Wang, and S. Smolley. (2017). Least Squares Generative Adversarial Networks. arXiv:1611.04076.", "T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. (2013). Distributed Representations of Words and Phrases and Their Compositionality.
In Advances in Neural Information Processing Systems 26 (NIPS 2013).", "T. Mikolov, K. Chen, G. Corrado, and J. Dean. (2013b). Efficient Estimation of Word Representations in Vector Space. In Workshop Track (ICLR 2013).", "M. Mirza and S. Osindero. (2014). Conditional Generative Adversarial Nets. arXiv:1411.1784.", "A. Odena. (2016). Semi-supervised Learning with Generative Adversarial Networks. arXiv:1606.01583.", "J. Pennington, R. Socher, and C. Manning. (2014). GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP 2014).", "O. Press, A. Bar, B. Bogin, J. Berant, and L. Wolf. (2017). Language Generation with Recurrent Generative Adversarial Networks without Pre-training. In 1st Workshop on Subword and Character Level Models in NLP (EMNLP 2017).", "S. Rajeswar, S. Subramanian, F. Dutil, C. Pal, and A. Courville. (2017). Adversarial Generation of Natural Language. arXiv:1705.10929.", "W. Rand. (1971). Objective Criteria for the Evaluation of Clustering Methods. Journal of the American Statistical Association, 66:846-850.", "T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. (2016). Improved Techniques for Training GANs. In Advances in Neural Information Processing Systems 29 (NIPS 2016).", "R. Socher, A. Perelygin, J. Wu, J. Chuang, C. Manning, A. Ng, and C. Potts. (2013). Recursive Deep Models for Semantic Compositionality over a Sentiment Treebank. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2013).", "J. Springenberg. (2016). Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks. In 4th International Conference on Learning Representations (ICLR 2016).", "L. van der Maaten and G. Hinton. (2008). Visualizing Data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.", "B. Wang, K. Liu, and J. Zhao. (2016). Conditional Generative Adversarial Networks for Commonsense Machine Comprehension.
In the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17).", "Y. Zhang, Z. Gan, and L. Carin. (2016). Generating Text via Adversarial Training. In Workshop on Adversarial Training (NIPS 2016).", "J. Zhao, M. Mathieu, and Y. LeCun. (2017). Energy-based Generative Adversarial Networks. In 5th International Conference on Learning Representations (ICLR 2017)." ] ] } { "question": [ "Which GAN do they use?", "Do they evaluate grammaticality of generated text?", "Which corpora do they use?" ], "question_id": [ "6548db45fc28e8a8b51f114635bad14a13eaec5b", "4c4f76837d1329835df88b0921f4fe8bda26606f", "819d2e97f54afcc7cdb3d894a072bcadfba9b747" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . " ], "yes_no": null, "free_form_answer": "", "evidence": [ "We assume that for each corpora INLINEFORM0 , we are given word embeddings for each word INLINEFORM1 , where INLINEFORM2 is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model INLINEFORM3 taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . Note that INLINEFORM7 are given but INLINEFORM8 is trained.
Here we consider INLINEFORM9 as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings INLINEFORM10 and the same documents represented by the new embeddings INLINEFORM11 ." ], "highlighted_evidence": [ "We assume that for each corpora INLINEFORM0 , we are given word embeddings for each word INLINEFORM1 , where INLINEFORM2 is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model INLINEFORM3 taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . Note that INLINEFORM7 are given but INLINEFORM8 is trained. Here we consider INLINEFORM9 as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings INLINEFORM10 and the same documents represented by the new embeddings INLINEFORM11 ." ] }, { "unanswerable": false, "extractive_spans": [ "weGAN", "deGAN" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Suppose we have a number of different corpora INLINEFORM0 , which for example can be based on different categories or sentiments of text documents. We suppose that INLINEFORM1 , INLINEFORM2 , where each INLINEFORM3 represents a document. The words in all corpora are collected in a dictionary, and indexed from 1 to INLINEFORM4 . 
We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”" ], "highlighted_evidence": [ "We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”" ] } ], "annotation_id": [ "544fe9ca42dab45cdbc085a5ba35c5f5a543ac46", "ca3d3530b5cba547a09ca34495db2fd0e2fb5a3e" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "We hypothesize that because weGAN takes into account document labels in a semi-supervised way, the embeddings trained from weGAN can better incorporate the labeling information and therefore, produce document embeddings which are better separated. The results are shown in Table 1 and averaged over 5 randomized runs. Performing the Welch's t-test, both changes after weGAN training are statistically significant at a INLINEFORM0 significance level. Because the Rand index captures matching accuracy, we observe from the Table 1 that weGAN tends to improve both metrics." ], "highlighted_evidence": [ "Performing the Welch's t-test, both changes after weGAN training are statistically significant at a INLINEFORM0 significance level. 
" ] } ], "annotation_id": [ "1eb616b54e8fc48a4d3377a611d6e227755c5035" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "CNN, TIME, 20 Newsgroups, and Reuters-21578" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In the experiments, we consider four data sets, two of them newly created and the remaining two already public: CNN, TIME, 20 Newsgroups, and Reuters-21578. The code and the two new data sets are available at github.com/baiyangwang/emgan. For the pre-processing of all the documents, we transformed all characters to lower case, stemmed the documents, and ran the word2vec model on each corpora to obtain word embeddings with a size of 300. In all subsequent models, we only consider the most frequent INLINEFORM0 words across all corpora in a data set." ], "highlighted_evidence": [ "In the experiments, we consider four data sets, two of them newly created and the remaining two already public: CNN, TIME, 20 Newsgroups, and Reuters-21578. The code and the two new data sets are available at github.com/baiyangwang/emgan. For the pre-processing of all the documents, we transformed all characters to lower case, stemmed the documents, and ran the word2vec model on each corpora to obtain word embeddings with a size of 300. In all subsequent models, we only consider the most frequent INLINEFORM0 words across all corpora in a data set." ] } ], "annotation_id": [ "c8550faf7a4f5b35e2b85a8b98bf0de7ec3d8473" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] } 2001.00137 Stacked DeBERT: All Attention in Incomplete Data for Text Classification In this paper, we propose Stacked DeBERT, short for Stacked Denoising Bidirectional Encoder Representations from Transformers. 
This novel model improves robustness to incomplete data, compared to existing systems, through a novel encoding scheme in BERT, a powerful language representation model based solely on attention mechanisms. Incomplete data in natural language processing refers to text with missing or incorrect words, whose presence can hinder the performance of current models that were not built to withstand such noise but must still perform well under it. This is because current approaches are built for and trained with clean and complete data, and thus are not able to extract features that can adequately represent incomplete data. Our proposed approach consists of obtaining intermediate input representations by applying an embedding layer to the input tokens followed by vanilla transformers. These intermediate features are given as input to novel denoising transformers, which are responsible for obtaining richer input representations. The proposed approach takes advantage of stacks of multilayer perceptrons for the reconstruction of missing words' embeddings, by extracting more abstract and meaningful hidden feature vectors, and of bidirectional transformers for improved embedding representation. We consider two datasets for training and evaluation: the Chatbot Natural Language Understanding Evaluation Corpus and Kaggle's Twitter Sentiment Corpus. Our model shows improved F1-scores and better robustness on the informal/incorrect texts present in tweets and on texts with Speech-to-Text error, in the sentiment and intent classification tasks.
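The stacked-MLP reconstruction step described in this abstract can be sketched numerically. This is a minimal NumPy sketch, not the authors' implementation: the ReLU activations, random weights, and the (batch, seq, emb) tensor layout are illustrative assumptions (the paper later states the tensors as (N_bs, 768, 128)); only the 768 → 128 → 32 → 12 compression sizes and the MSE objective come from the paper.

```python
import numpy as np

def mlp_layer(x, w):
    # One perceptron layer; the ReLU activation is an assumption.
    return np.maximum(x @ w, 0.0)

rng = np.random.default_rng(0)
batch, seq, emb = 4, 128, 768
h_inc = rng.normal(size=(batch, seq, emb))   # embeddings of incomplete sentences
h_comp = rng.normal(size=(batch, seq, emb))  # embeddings of their complete versions

# Encoder compresses 768 -> 128 -> 32 -> 12 (latents z1, z2, z);
# the decoder mirrors it to reconstruct h_rec.
enc = [rng.normal(scale=0.02, size=s) for s in [(768, 128), (128, 32), (32, 12)]]
dec = [rng.normal(scale=0.02, size=s) for s in [(12, 32), (32, 128), (128, 768)]]

z = h_inc
for w in enc:
    z = mlp_layer(z, w)          # final latent z: (batch, seq, 12)

h_rec = z
for w in dec:
    h_rec = mlp_layer(h_rec, w)  # reconstruction: (batch, seq, 768)

# Reconstruction is trained against the complete embeddings with an MSE loss.
mse = np.mean((h_rec - h_comp) ** 2)
```

In the full model these weights would be learned end to end; the sketch only shows the shape flow and the loss.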
{ "section_name": [ "Introduction", "Proposed model", "Dataset ::: Twitter Sentiment Classification", "Dataset ::: Intent Classification from Text with STT Error", "Experiments ::: Baseline models", "Experiments ::: Baseline models ::: NLU service platforms", "Experiments ::: Baseline models ::: Semantic hashing with classifier", "Experiments ::: Training specifications", "Experiments ::: Training specifications ::: NLU service platforms", "Experiments ::: Training specifications ::: Semantic hashing with classifier", "Experiments ::: Training specifications ::: BERT", "Experiments ::: Training specifications ::: Stacked DeBERT", "Experiments ::: Results on Sentiment Classification from Incorrect Text", "Experiments ::: Results on Intent Classification from Text with STT Error", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy in research paper writing is effectively zero when the entire document is considered. This has been aggravated by the advent of the internet and social networks, which have rapidly transformed language and modern communication BIBREF1, BIBREF2.
Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard for correct sentence grammar or word spelling BIBREF3.", "Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle to the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to enabling more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for the development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.", "The advancement of deep neural networks has immensely aided the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification have been made possible by models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need for research in that area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regard for sentence correctness.", "Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al.
BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach the state of the art in the intent recognition task, using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.", "The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing-number imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple Imputation by Chained Equations (MICE), at missing-data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utmost importance for high-performing methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete-data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers.
The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:", "A novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.", "The proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and the release of corpora related to these tasks.", "The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3, which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusions and future work." ], [ "We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheme for the tasks of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similar to conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words.
By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.", "The initial part of the model is conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special '[CLS]' token and suffixes each sentence with a '[SEP]' token. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embeddings, segmentation embeddings and position embeddings. The first, the token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first '[SEP]' token appears (indicating segment A) and then becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which output a hidden embedding that can be used by our novel layers of denoising transformers.", "Although BERT has been shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words.
With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words' embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.", "The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each. The first set is responsible for compressing $h_{inc}$ into a latent-space representation, extracting more abstract features into lower-dimension vectors $z_1$, $z_2$ and $\mathbf{z}$ with shapes $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):", "where $f(\cdot)$ is the parameterized function mapping $h_{inc}$ to the hidden state $\mathbf{z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\mathbf{z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):", "where $g(\cdot)$ is the parameterized function that reconstructs $\mathbf{z}$ as $h_{rec}$.", "The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq.
(DISPLAY_FORM7):", "After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.", "Classification is done with a feedforward network and a softmax activation function. Softmax $\sigma$ is a discrete probability distribution function for $N_C$ classes, with the sum of the class probabilities being 1 and the class with the maximum value being the predicted class. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):", "where $o = Wt + b$ is the output of the feedforward layer used for classification." ], [ "In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor-quality texts obtained from Twitter, called tweets, are ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing, without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.", "Even though this corpus has incorrect sentences and their emotional labels, it lacks the respective corrected sentences, which are necessary for the training of our model. In order to obtain this missing information, we recruit native English speakers through an unbiased and anonymous platform called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.", "After obtaining the correct sentences, our two-class dataset has the class distribution shown in Table TABREF14.
There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples used in the evaluation stage, with 25 negative and 25 positive. This totals 300 samples, with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning." ], [ "In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification of incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Because the available TTS and STT modules are imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis of this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.", "The dataset used to evaluate the models' performance is the Chatbot Natural Language Understanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection, with 100 train and 106 test samples, shown in Table TABREF18.
Even though English is the main language of the benchmark, this dataset contains a few German station and street names.", "The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts, a Google Text-to-Speech python library, and macsay, a terminal command available in Mac OS as 'say'. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai, freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen based on code availability and on whether they are freely available or have high daily usage limitations.", "Table TABREF24 exemplifies a complete sentence and its respective incomplete sentences with different TTS-STT combinations, and thus varying rates of missing and incorrect words. The level of noise in the STT-imbued sentences is denoted by an inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is defined in Eq. (DISPLAY_FORM23):", "where BLEU is a common metric usually used in machine translation tasks BIBREF21. We decide to showcase it instead of regular BLEU because it is more indicative of the amount of noise in the incomplete text, where the higher the iBLEU, the higher the noise." ], [ "Besides the already mentioned BERT, the following baseline models are also used for comparison." ], [ "We focus on the three following services, where the first two are commercial services and the last one is open source with two separate backends: Google Dialogflow (formerly Api.ai), SAP Conversational AI (formerly Recast.ai) and Rasa (spacy and tensorflow backends)." ], [ "Shridhar et al. BIBREF12 proposed a word embedding method that doesn't suffer from out-of-vocabulary issues.
The authors achieve this by using hash tokens over the characters of a word instead of the whole word, making the representation vocabulary-independent. For classification, classifiers such as Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications is given in Section SECREF31." ], [ "The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: once on complete data and once for each of the TTS-STT combinations (gtts-witai and macsay-witai). For the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: on the original text, on the corrected text, and on the incorrect and correct texts combined. The reported F1 scores are the best values obtained from 10 runs."
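The character-level hashing idea can be sketched as follows. This is a minimal illustration, not the authors' code: the '#' boundary markers, the trigram size, and the hashing-trick projection to a 768-dimensional count vector (chosen to match the feature vector size mentioned for the classifiers) are assumptions based on the description of the cited method.

```python
def subword_trigrams(word):
    """Split a word into character trigrams with '#' boundary markers,
    making the representation independent of any fixed vocabulary."""
    padded = f"#{word}#"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def hash_features(sentence, dim=768):
    """Hashing trick: fold trigram counts into a fixed-size vector
    (dim is an assumption matching the classifiers' feature size)."""
    vec = [0.0] * dim
    for word in sentence.lower().split():
        for tri in subword_trigrams(word):
            vec[hash(tri) % dim] += 1.0
    return vec

print(subword_trigrams("hello"))  # ['#he', 'hel', 'ell', 'llo', 'lo#']
```

The resulting fixed-size vectors can then be fed to any of the listed classifiers (MLP, SVM, Random Forest, etc.).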
], [ "Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks ($L$), a hidden size ($H$) of 768, and 12 self-attention heads ($A$). The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with the Adam optimizer, a learning rate of $2\times 10^{-5}$, a maximum sequence length of 128, and a warm-up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus." ], [ "Our proposed model is trained in an end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and the train batch size. The stack of multilayer perceptrons is trained for 100 and 1,000 epochs with the Adam optimizer, a learning rate of $10^{-3}$, a weight decay of $10^{-5}$, an MSE loss criterion and the same batch size as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus)." ], [ "Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micro scores, outperforming the baseline models by 6$\%$ to 8$\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\%$ against BERT's 72$\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\%$ accuracy against BERT's 76$\%$, an improvement of 6$\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first version: 80$\%$ for our model and 74$\%$ for the second highest performing model.
Since the first and last versions gave similar performance with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.", "In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the true labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes. In the Inc dataset, true class 1 under BERT achieves approximately 50% accuracy. However, Stacked DeBERT is able to improve that to 72%, although at the cost of a small decrease in the performance of class 0. A similar situation happens in the remaining two datasets, with accuracy in class 0 improving from 64% to 84% and from 60% to 76%, respectively." ], [ "Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with an F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with an F1-score of 96.23%.", "The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean a higher quantity of noise. As expected, the models' accuracy degrades as noise increases; thus, F1-scores for gtts-witai are higher than for macsay-witai.
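The normalized confusion matrices referred to above can be produced from raw prediction counts by row-normalizing, so that each true-class row sums to 1; this is a minimal illustrative sketch, not the authors' plotting code:

```python
def normalize_confusion_matrix(counts):
    """Row-normalize raw counts: entry [i][j] becomes the fraction of
    true-class-i samples that were predicted as class j."""
    normalized = []
    for row in counts:
        total = sum(row)
        normalized.append([c / total if total else 0.0 for c in row])
    return normalized
```

After normalization, the diagonal entries are the per-class recall values discussed in the text.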
However, while the other models decay rapidly in the presence of noise, our model not only outperforms them but does so by a wider margin. This is shown by the increasing robustness curve in Fig. FIGREF41 and demonstrated by macsay-witai outperforming the baseline models by twice the gap achieved by gtts-witai.", "Further analysis of the results in Table TABREF40 shows that BERT's decay is almost constant as noise is added: the difference between the complete data and gtts-witai is 1.88, and between gtts-witai and macsay-witai it is 1.89. In Stacked DeBERT, those differences are 1.89 and 0.94, respectively. This is a stronger indication of our model's robustness in the presence of noise.", "Additionally, we present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one." ], [ "In this work, we proposed Stacked DeBERT, a novel deep neural network robust to noisy text in the form of sentences with missing and/or incorrect words. The idea was to improve accuracy by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with a neural classifier. Our model showed better performance when evaluated on F1 scores on both the Twitter sentiment classification task and the intent classification task on text with STT error.
The per-class F1 score was also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading off small decreases in the high-achieving class for significant improvements in the lower-performing ones. In the Chatbot dataset, accuracy improvement was achieved even without this trade-off, with the highest-achieving classes maintaining their accuracy while the lower-achieving class saw improvement. Further evaluation of the F1-score decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Moreover, experiments on the Twitter dataset also showed improved accuracy on clean data with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future work, we plan to evaluate the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search of more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer." ], [ "This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (50%) and the Technology Innovation Program: Industrial Strategic Technology Development Program (No: 10073162) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea) (50%).
] ] } { "question": [ "Do they report results only on English datasets?", "How do the authors define or exemplify 'incorrect words'?", "How many vanilla transformers do they use after applying an embedding layer?", "Do they test their approach on a dataset without incomplete data?", "Should their approach be applied only when dealing with incomplete data?", "By how much do they outperform other models in the sentiment in intent classification tasks?" ], "question_id": [ "637aa32a34b20b4b0f1b5dfa08ef4e0e5ed33d52", "4b8257cdd9a60087fa901da1f4250e7d910896df", "7e161d9facd100544fa339b06f656eb2fc64ed28", "abc5836c54fc2ac8465aee5a83b9c0f86c6fd6f5", "4debd7926941f1a02266b1a7be2df8ba6e79311a", "3b745f086fb5849e7ce7ce2c02ccbde7cfdedda5" ], "nlp_background": [ "five", "five", "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "search_query": [ "twitter", "twitter", "", "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. 
Some examples are shown in Table TABREF12.", "The dataset used to evaluate the models' performance is the Chatbot Natural Language Understanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names." ], "highlighted_evidence": [ "Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.", "The dataset used to evaluate the models' performance is the Chatbot Natural Language Understanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names."
] } ], "annotation_id": [ "c7a83f3225e54b6306ef3372507539e471c155d0" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "typos in spellings or ungrammatical words", "evidence": [ "Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of internet and social networks, which allowed language and modern communication to be been rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard to correct sentence grammar or word spelling BIBREF3." ], "highlighted_evidence": [ "Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. 
" ] } ], "annotation_id": [ "7c44e07bb8f2884cd73dd023e86dfeb7241e999c" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "6d5e9774c1d04b3cac91fcc7ac9fd6ff56d9bc63" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts , a Google Text-to-Speech python library, and macsay , a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai , freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen according to code availability and whether it's freely available or has high daily usage limitations." ], "highlighted_evidence": [ "The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora." ] }, { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. 
Thus, it has many mistakes, as specified in Table TABREF11.", "In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis on this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness." ], "highlighted_evidence": [ "For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.", "In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words." 
] } ], "annotation_id": [ "b36dcc41db3d7aa7503fe85cbc1793b27473e4ed", "f0dd380c67caba4c7c3fe0ee9b8185f4923ed868" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$to 8$\\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$against BERT's 72$\\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\\%$accuracy against BERT's 76$\\%$, an improvement of 6$\\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\\%$for our model and 74$\\%$for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences." ], "highlighted_evidence": [ "We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$against BERT's 72$\\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. 
" ] }, { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheming for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT." ], "highlighted_evidence": [ "The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. " ] } ], "annotation_id": [ "2b4f582794c836ce6cde20b07b5f754cb67f8e20", "c6bacbe8041fdef389e98b119b050cb03cce14e1" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "In the sentiment classification task by 6% to 8% and in the intent classification task by 0.94% on average", "evidence": [ "Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$to 8$\\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$against BERT's 72$\\%$. 
The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\\%$accuracy against BERT's 76$\\%$, an improvement of 6$\\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\\%$for our model and 74$\\%$for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.", "Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.", "FLOAT SELECTED: Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5))." ], "highlighted_evidence": [ "Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$to 8$\\%$. ", "Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. 
When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.", "FLOAT SELECTED: Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5))." ] } ], "annotation_id": [ "1f4a6fce4f78662774735b1e27744f55b0efd7a8" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] } 1910.03042 Gunrock: A Social Bot for Complex and Engaging Long Conversations Gunrock is the winner of the 2018 Amazon Alexa Prize, as evaluated by coherence and engagement from both real users and Amazon-selected expert conversationalists. We focus on understanding complex sentences and having in-depth conversations in open domains. In this paper, we introduce some innovative system designs and related validation analysis. Overall, we found that users produce longer sentences to Gunrock, which are directly related to users' engagement (e.g., ratings, number of turns). Additionally, users' backstory queries about Gunrock are positively correlated to user satisfaction. Finally, we found dialog flows that interleave facts and personal opinions and stories lead to better user satisfaction. 
{ "section_name": [ "Introduction", "System Architecture", "System Architecture ::: Automatic Speech Recognition", "System Architecture ::: Natural Language Understanding", "System Architecture ::: Dialog Manager", "System Architecture ::: Knowledge Databases", "System Architecture ::: Natural Language Generation", "System Architecture ::: Text To Speech", "Analysis", "Analysis ::: Response Depth: Mean Word Count", "Analysis ::: Gunrock's Backstory and Persona", "Analysis ::: Interleaving Personal and Factual Information: Animal Module", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example)." 
], [ "Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6, which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we mark up the synthesized responses and return them to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, please see the technical report BIBREF1 for detailed implementation details." ], [ "Gunrock receives ASR results with the raw text and timestep information for each word in the sequence (without case information and punctuation). Keywords, especially named entities such as movie names, are prone to ASR errors without contextual information, but are essential for NLU and NLG. Therefore, Gunrock uses domain knowledge to correct these errors by comparing noun phrases to a knowledge base (e.g., a list of the most popular movie names) based on their phonetic information. We extract the primary and secondary code using the Double Metaphone Search Algorithm BIBREF8 for noun phrases (extracted by noun chunks) and the selected knowledge base, and suggest a potential fix by code matching. An example can be seen in User_3 and Gunrock_3 in Table TABREF2." ], [ "Gunrock is designed to engage users in deeper conversation; accordingly, a user utterance can consist of multiple units with complete semantic meanings.
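The phonetic code-matching step just described can be sketched as follows. Since Double Metaphone is not in the Python standard library, we substitute a simplified Soundex code purely for illustration, and the knowledge-base entry is invented:

```python
def soundex(word):
    """Simplified Soundex phonetic code, standing in for the Double
    Metaphone codes the real system uses."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    encoded = word[0].upper()
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            encoded += code
        prev = code
    return (encoded + "000")[:4]

def correct_entity(noun_phrase, knowledge_base):
    """Suggest a knowledge-base entry whose phonetic code matches the
    (possibly misrecognized) ASR noun phrase; otherwise keep the input."""
    target = soundex(noun_phrase.replace(" ", ""))
    for entry in knowledge_base:
        if soundex(entry.replace(" ", "")) == target:
            return entry
    return noun_phrase
```

For example, an ASR output like "a star is borne" maps to the same phonetic code as the knowledge-base entry "a star is born", so the correction can be suggested by code matching.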
We first split the corrected raw ASR text into sentences by inserting break tokens. An example is shown in User_3 in Table TABREF2. Meanwhile, we mask named entities before segmentation so that a named entity will not be segmented into multiple parts and an utterance with a complete meaning is maintained (e.g.,“i like the movie a star is born\"). We also leverage timestep information to filter out false positive corrections. After segmentation, our coreference implementation leverages entity knowledge (such as person versus event) and replaces nouns with their actual reference by entity ranking. We implement coreference resolution on entities both within segments in a single turn as well as across multiple turns. For instance, “him\" in the last segment in User_5 is replaced with “bradley cooper\" in Table TABREF2. Next, we use a constituency parser to generate noun phrases from each modified segment. Within the sequence pipeline to generate complete segments, Gunrock detects (1) topic, (2) named entities, and (3) sentiment using ASK in parallel. The NLU module uses knowledge graphs including Google Knowledge Graph to call for a detailed description of each noun phrase for understanding.", "In order to extract the intent for each segment, we designed MIDAS, a human-machine dialog act scheme with 23 tags and implemented a multi-label dialog act classification model using contextual information BIBREF9. Next, the NLU components analyzed on each segment in a user utterance are sent to the DM and NLG module for state tracking and generation, respectively." ], [ "We implemented a hierarchical dialog manager, consisting of a high level and low level DMs. The former leverages NLU outputs for each segment and selects the most important segment for the system as the central element using heuristics. For example, “i just finished reading harry potter,\" triggers Sub-DM: Books. 
Utilizing the central element and features extracted from NLU, input utterances are mapped onto 11 possible topic dialog modules (e.g., movies, books, animals, etc.), including a backup module, retrieval.", "Low level dialog management is handled by the separate topic dialog modules, which use modular finite state transducers to execute various dialog segments processed by the NLU. Using topic-specific modules enables deeper conversations that maintain the context. We design dialog flows in each of the finite state machines, as well. Dialog flow is determined by rule-based transitions between a specified fixed set of dialog states. To ensure that our states and transitions are effective, we leverage large scale user data to find high probability responses and high priority responses to handle in different contexts. Meanwhile, dialog flow is customized to each user by tracking user attributes as dialog context. In addition, each dialog flow is adaptive to user responses to show acknowledgement and understanding (e.g., talking about pet ownership in the animal module). Based on the user responses, many dialog flow variations exist to provide a fresh experience each time. This reduces the feeling of dialogs being scripted and repetitive. Our dialog flows additionally interleave facts, opinions, experiences, and questions to make the conversation flexible and interesting.", "In the meantime, we consider feedback signals such as “continue\" and “stop\" from the current topic dialog module, indicating whether it is able to respond to the following request in the dialog flow, in order to select the best response module. Additionally, in all modules we allow mixed-initiative interactions; users can trigger a new dialog module when they want to switch topics while in any state. For example, users can start a new conversation about movies from any other topic module." ], [ "All topic dialog modules query knowledge bases to provide information to the user. 
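The rule-based dialog flows described above (a fixed set of dialog states with rule-based transitions) can be sketched as a transition table; the states, user signals, and the animal topic used here are invented for illustration and are not Gunrock's actual modules:

```python
# Invented example transitions for a toy "animal" topic dialog module.
# A wildcard signal "any" matches when no specific edge applies.
TRANSITIONS = {
    ("ask_pet", "yes"): "ask_pet_name",
    ("ask_pet", "no"): "share_fact",
    ("ask_pet_name", "any"): "share_fact",
    ("share_fact", "any"): "ask_opinion",
}

def next_state(state, user_signal):
    """Rule-based transition: exact edge first, then wildcard, else stay."""
    return (TRANSITIONS.get((state, user_signal))
            or TRANSITIONS.get((state, "any"))
            or state)
```

Tracking user attributes (e.g., pet ownership) alongside the current state is what lets each flow adapt its acknowledgements to the individual user.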
To respond to general factual questions, Gunrock queries the EVI factual database, as well as other up-to-date scraped information appropriate for the submodule, such as news and currently showing movies in a specific location from databases including IMDB. One contribution of Gunrock is the extensive Gunrock Persona Backstory database, consisting of over 1,000 responses to possible questions for Gunrock as well as reasoning for her responses for roughly 250 questions (see Table 2). We designed the system responses to elicit a consistent personality within and across modules, modeled as a female individual who is positive, outgoing, and interested in science and technology." ], [ "In order to avoid the repetitive and non-specific responses commonly seen in dialog systems BIBREF10, Gunrock uses a template manager to select from handcrafted response templates based on the dialog state. One dialog state can map to multiple response templates with similar semantic or functional content but differing surface forms. Among these response templates for the same dialog state, one is randomly selected without repetition to provide variety unless all have been exhausted. When a response template is selected, any slots are substituted with actual contents, including queried information for news and specific data for weather. For example, to ground a movie name due to ASR errors or multiple versions, one template is “Are you talking about {movie_title} released in {release_year} starring {actor_name} as {actor_role}?\". Module-specific templates were generated for each topic (e.g., animals), but some of the templates are generalizable across different modules (e.g., “What’s your favorite [movie$|$book$|$place to visit]?\").", "In many cases, response templates corresponding to different dialog acts are dynamically composed to give the final response. For example, an appropriate acknowledgement for the user’s response can be combined with a predetermined follow-up question."
], [ "After NLG, we adjust the TTS of the system to improve the expressiveness of the voice to convey that the system is an engaged and active participant in the conversation. We use a rule-based system to systematically add interjections, specifically Alexa Speechcons, and fillers to approximate human-like cognitive-emotional expression BIBREF11. For more on the framework and analysis of the TTS modifications, see BIBREF12." ], [ "From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)." ], [ "Two unique features of Gunrock are its ability to dissect longer, complex sentences, and its methods to encourage users to be active conversationalists, elaborating on their responses. In prior work, even if users are able to drive the conversation, often bots use simple yes/no questions to control the conversational flow to improve understanding; as a result, users are more passive interlocutors in the conversation. We aimed to improve user engagement by designing the conversation to have more open-ended opinion/personal questions, and show that the system can understand the users' complex utterances (See nlu for details on NLU). 
Accordingly, we ask if users' speech behavior will reflect Gunrock's technical capability and conversational strategy, producing longer sentences.", "We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.", "Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\\beta $=0.01, SE=0.002, t=4.79, p$<$0.001) (see Figure 2) and engaged with Gunrock for a significantly greater number of turns ($\\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts." ], [ "We assessed the user's interest in Gunrock by tagging instances where the user triggered Gunrock's backstory (e.g., “What's your favorite color?\"). For users with at least one backstory question, we modeled overall (log) Rating with a linear regression by the (log) `Number of Backstory Questions Asked' (log transformed due to the variables' nonlinear relationship).
We hypothesized that users who show greater curiosity about Gunrock will display higher overall ratings for the conversation based on her responses. Overall, the number of times users queried Gunrock's backstory was strongly related to the rating they gave at the end of the interaction (log: $\beta $=0.10, SE=0.002, t=58.4, p$<$0.001) (see Figure 3). This suggests that maintaining a consistent personality — and having enough responses to questions the users are interested in — may improve user satisfaction." ], [ "Gunrock includes a specific topic module on animals, which includes a factual component where the system provides animal facts, as well as a more personalized component about pets. Our system is designed to engage users about animals in a more casual conversational style BIBREF14, eliciting follow-up questions if the user indicates they have a pet; if we are able to extract the pet's name, we refer to it in the conversation (e.g., “Oliver is a great name for a cat!\", “How long have you had Oliver?\"). In cases where the user does not indicate that they have a pet, the system solely provides animal facts. Therefore, the animal module can serve as a test of our interleaving strategy: we hypothesized that combining facts and personal questions — in this case about the user's pet — would lead to greater user satisfaction overall.", "We extracted conversations where Gunrock asked the user if they had ever had a pet and categorized responses as “Yes\", “No\", or “NA\" (if users did not respond with an affirmative or negative response). We modeled user rating with a linear regression model, with the predictor “Has Pet\" (2 levels: Yes, No). We found that users who talked to Gunrock about their pet showed significantly higher overall ratings of the conversation ($\\beta $=0.15, SE=0.06, t=2.53, p$=$0.016) (see Figure 4). One interpretation is that interleaving factual information with more in-depth questions about their pet results in improved user experience.
Yet another interpretation is that pet owners may be more friendly and amenable to a socialbot; for example, prior research has linked differences in personality to pet ownership BIBREF15." ], [ "Gunrock is a social chatbot that focuses on having long and engaging speech-based conversations with thousands of real users. Accordingly, our architecture employs specific modules to handle longer and complex utterances and encourages users to be more active in a conversation. Analysis shows that users' speech behavior reflects these capabilities. Longer sentences and more questions about Gunrock's backstory positively correlate with user experience. Additionally, we find evidence for interleaved dialog flow, where combining factual information with personal opinions and stories improves user satisfaction. Overall, this work has practical applications in applying these design principles to other social chatbots, as well as theoretical implications for the nature of human-computer interaction (cf. 'Computers are Social Actors' BIBREF16). Our results suggest that users are engaging with Gunrock in similar ways to other humans: in chitchat about general topics (e.g., animals, movies, etc.), taking interest in Gunrock's backstory and persona, and even producing more information about themselves in return." ], [ "We would like to acknowledge the financial and technical support from Amazon." ] ] } { "question": [ "What is the sample size of people used to measure user satisfaction?", "What are all the metrics to measure user engagement?", "What the system designs introduced?", "Do they specify the model they use for Gunrock?", "Do they gather explicit user satisfaction data on Gunrock?", "How do they correlate user backstory queries to user satisfaction?"
], "question_id": [ "44c7c1fbac80eaea736622913d65fe6453d72828", "3e0c9469821cb01a75e1818f2acb668d071fcf40", "a725246bac4625e6fe99ea236a96ccb21b5f30c6", "516626825e51ca1e8a3e0ac896c538c9d8a747c8", "77af93200138f46bb178c02f710944a01ed86481", "71538776757a32eee930d297f6667cd0ec2e9231" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "34,432 user conversations" ], "yes_no": null, "free_form_answer": "", "evidence": [ "From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)." 
], "highlighted_evidence": [ " Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\")." ] }, { "unanswerable": false, "extractive_spans": [ "34,432 " ], "yes_no": null, "free_form_answer": "", "evidence": [ "Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example).", "From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. 
We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)." ], "highlighted_evidence": [ "Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. ", "From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock.", "We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations." 
] } ], "annotation_id": [ "a7ea8bc335b1a8d974c2b6a518d4efb4b9905549", "b9f1ba799b2d213f5d7ce0b1e03adcac6ad30772" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "overall rating", "mean number of turns" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions." ], "highlighted_evidence": [ " We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions." ] }, { "unanswerable": false, "extractive_spans": [ "overall rating", "mean number of turns" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions." ], "highlighted_evidence": [ "We assessed the degree of conversational depth by measuring users' mean word count. 
Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions." ] } ], "annotation_id": [ "430a57dc6dc6a57617791e25e886c1b8d5ad6c35", "ea5628650f48b7c9dac7c9255f29313a794748e0" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Amazon Conversational Bot Toolkit", "natural language understanding (NLU) (nlu) module", "dialog manager", "knowledge bases", "natural language generation (NLG) (nlg) module", "text to speech (TTS) (tts)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). 
While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1." ], "highlighted_evidence": [ "We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts)." ] } ], "annotation_id": [ "7196fa2dc147c614e3dce0521e0ec664d2962f6f" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. 
Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1." ], "highlighted_evidence": [ "We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response.", "While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1." ] } ], "annotation_id": [ "88ef01edfa9b349e03b234f049663bd35c911e3b" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). 
Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)." ], "highlighted_evidence": [ "Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\")." ] } ], "annotation_id": [ "20c1065f9d96bb413f4d24665d0d30692ad2ded6" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.", "Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). 
These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts." ], "highlighted_evidence": [ "We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.\n\nResults showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts." ] } ], "annotation_id": [ "9766d4b4b1500c83da733bd582476733ecd100ce" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] } 2002.06644 Towards Detection of Subjective Bias using Contextualized Word Embeddings Subjective bias detection is critical for applications like propaganda detection, content recommendation, sentiment analysis, and bias neutralization. 
This bias is introduced in natural language via inflammatory words and phrases, casting doubt over facts, and presupposing the truth. In this work, we perform comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus (WNC). The dataset consists of $360k$ labeled instances, from Wikipedia edits that remove various instances of the bias. We further propose BERT-based ensembles that outperform state-of-the-art methods like $BERT_{large}$ by a margin of $5.6$ F1 score. { "section_name": [ "Introduction", "Baselines and Approach", "Baselines and Approach ::: Baselines", "Baselines and Approach ::: Proposed Approaches", "Experiments ::: Dataset and Experimental Settings", "Experiments ::: Experimental Results", "Conclusion" ], "paragraphs": [ [ "In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculations BIBREF0, often influenced by one's emotional state and viewpoints. Writers and editors of texts like news and textbooks try to avoid the use of biased language, yet subjective bias is pervasive in these texts. More than $56\\%$ of Americans believe that news sources do not report the news objectively, thus implying the prevalence of the bias. Therefore, when presenting factual information, it becomes necessary to differentiate subjective language from objective language.", "There has been considerable work on capturing subjectivity using text-classification models ranging from linguistic-feature-based models BIBREF1 to finetuned pre-trained word embeddings like BERT BIBREF2. The detection of bias-inducing words in a Wikipedia statement was explored in BIBREF1. The authors propose the "Neutral Point of View" (NPOV) corpus made using Wikipedia revision history, containing Wikipedia edits that are specifically designed to remove subjective bias.
They use logistic regression with linguistic features, including factive verbs, hedges, and subjective intensifiers, to detect bias-inducing words. In BIBREF2, the authors extend this work by mitigating subjective bias after detecting bias-inducing words using a BERT-based model. However, they primarily focused on detecting and mitigating subjective bias for single-word edits. We extend their work by incorporating multi-word edits by detecting bias at the sentence level. We further use their version of the NPOV corpus called the Wiki Neutrality Corpus (WNC) for this work.", "The task of detecting sentences containing subjective bias rather than individual words inducing the bias has been explored in BIBREF3. However, they conduct the majority of their experiments in controlled settings, limiting the type of articles from which the revisions were extracted. Their attempt to test their models in a general setting is dwarfed by the fact that they used revisions from a single Wikipedia article, resulting in just 100 instances with which to evaluate their proposed models robustly. Consequently, we perform our experiments on the complete WNC corpus, which consists of $423,823$ revisions in Wikipedia marked by its editors over a period of 15 years, to simulate a more general setting for the bias.", "In this work, we investigate the application of BERT-based models for the task of subjective language detection. We explore various BERT-based models, including BERT, RoBERTa, and ALBERT, with their base and large specifications along with their native classifiers. We propose an ensemble model exploiting predictions from these models using multiple ensembling techniques. We show that our model outperforms the baselines by a margin of $5.6$ F1 score and $5.95\\%$ Accuracy." ], [ "In this section, we outline baseline models like $BERT_{large}$.
We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection." ], [ "FastText BIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.", "BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.", "BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$-word corpus. We use the $BERT_{large}$ model finetuned on the training dataset." ], [ "Optimized BERT-based models: We use BERT-based models optimized as in BIBREF6 and BIBREF7, pretrained on a dataset up to twelve times as large as that of $BERT_{large}$, with bigger batches and longer sequences. ALBERT, introduced in BIBREF7, uses factorized embedding parameterization and cross-layer parameter sharing for parameter reduction. These optimizations have led both models to outperform $BERT_{large}$ in various benchmarking tests, like GLUE for text classification and SQuAD for Question Answering.", "Distilled BERT-based models: Secondly, we propose to use distilled BERT-based models, as introduced in BIBREF8. They are smaller general-purpose language representation models, pre-trained by leveraging knowledge distillation. This results in significantly smaller and faster models with performance comparable to their undistilled versions. We finetune these pretrained distilled models on the training corpus to efficiently detect subjectivity.", "BERT-based ensemble models: Lastly, we use the weighted-average ensembling technique to exploit the predictions made by different variations of the above models.
Ensembling entails building a predictive model from the predictions of multiple models in order to improve Accuracy and F1 and to decrease variance and bias. We experiment with variations of $RoBERTa_{large}$, $ALBERT_{xxlarge.v2}$, $DistilRoBERTa$, and $BERT$, and outline selected combinations in tab:experimental-results." ], [ "We perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre- and post-neutralized sentences made by Wikipedia editors under the neutral point of view. It contains $180k$ biased sentences and their neutral counterparts, crawled from $423,823$ Wikipedia revisions between 2004 and 2019. We randomly shuffle these sentences, split the dataset in a $90:10$ Train-Test ratio, and perform the evaluation on the held-out test dataset.", "For all BERT-based models, we use a learning rate of $2*10^{-5}$, a maximum sequence length of 50, and a weight decay of $0.01$ while finetuning the model. We use FastText's recently open-sourced automatic hyperparameter optimization functionality while training the model. For the BiLSTM baseline, we use a dropout of $0.05$ along with a recurrent dropout of $0.2$ in two stacked 64-unit BiLSTMs, using a softmax activation layer as the final dense layer." ], [ "tab:experimental-results shows the performance of different models on the WNC corpus evaluated on the following four metrics: Precision, Recall, F1, and Accuracy. Our proposed methods, finetuned optimized BERT-based models and BERT-based ensemble models, outperform the baselines for all the metrics.", "Among the optimized BERT-based models, $RoBERTa_{large}$ outperforms all other non-ensemble models and the baselines for all metrics. It further achieves the maximum recall, $0.681$, among all the proposed models. We note that DistilRoBERTa, a distilled model, performs competitively, achieving $69.69\%$ accuracy and a $0.672$ F1 score.
This observation shows that distilled pretrained models can replace their undistilled counterparts in low-computing environments.", "We further observe that ensemble models perform better than optimized BERT-based models and distilled pretrained models. Our proposed ensemble comprising $RoBERTa_{large}$, $ALBERT_{xxlarge.v2}$, $DistilRoBERTa$, and $BERT$ outperforms all the proposed models, obtaining a $0.704$ F1 score, $0.733$ precision, and $71.61\%$ Accuracy." ], [ "In this paper, we investigated BERT-based architectures for sentence-level subjective bias detection. We performed our experiments on a general Wikipedia corpus consisting of more than $360k$ sentences before and after subjective bias neutralization. We found our proposed architectures to outperform the existing baselines significantly. A BERT-based ensemble consisting of RoBERTa, ALBERT, DistilRoBERTa, and BERT led to the highest F1 and Accuracy. In the future, we would like to explore document-level detection of subjective bias, multi-word mitigation of the bias, and applications of detecting the bias in recommendation systems." ] ] } { "question": [ "Do the authors report only on English?", "What is the baseline for the experiments?", "Which experiments are perfomed?"
], "question_id": [ "830de0bd007c4135302138ffa8f4843e4915e440", "680dc3e56d1dc4af46512284b9996a1056f89ded", "bd5379047c2cf090bea838c67b6ed44773bcd56f" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "bias", "bias", "bias" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "dfc487e35ee5131bc5054463ace009e6bd8fc671" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "FastText", "BiLSTM", "BERT" ], "yes_no": null, "free_form_answer": "", "evidence": [ "FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.", "BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.", "BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large$3.3B$word corpus. We use the$BERT_{large}$model finetuned on the training dataset." ], "highlighted_evidence": [ "FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.", "BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. 
We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.", "BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$-word corpus. We use the $BERT_{large}$ model finetuned on the training dataset." ] }, { "unanswerable": false, "extractive_spans": [ "FastText", "BERT ", "two-layer BiLSTM architecture with GloVe word embeddings" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Baselines and Approach", "In this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection.", "Baselines and Approach ::: Baselines", "FastText BIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.", "BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.", "BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$-word corpus. We use the $BERT_{large}$ model finetuned on the training dataset.", "FLOAT SELECTED: Table 1: Experimental Results for the Subjectivity Detection Task" ], "highlighted_evidence": [ "Baselines and Approach\nIn this section, we outline baseline models like $BERT_{large}$.
We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection.\n\n", "Baselines and Approach ::: Baselines\nFastText BIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.\n\nBiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.\n\nBERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$-word corpus. We use the $BERT_{large}$ model finetuned on the training dataset.", "FLOAT SELECTED: Table 1: Experimental Results for the Subjectivity Detection Task" ] } ], "annotation_id": [ "23c76dd5ac11dd015f81868f3a8e1bafdf3d424c", "2c63f673e8658e64600cc492bc7d6a48b56c2119" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "They used BERT-based models to detect subjective language in the WNC corpus", "evidence": [ "In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculations BIBREF0, often influenced by one's emotional state and viewpoints. Writers and editors of texts like news and textbooks try to avoid the use of biased language, yet subjective bias is pervasive in these texts. More than $56\%$ of Americans believe that news sources do not report the news objectively, thus implying the prevalence of the bias.
Therefore, when presenting factual information, it becomes necessary to differentiate subjective language from objective language.", "In this work, we investigate the application of BERT-based models for the task of subjective language detection. We explore various BERT-based models, including BERT, RoBERTa, and ALBERT, with their base and large specifications, along with their native classifiers. We propose an ensemble model exploiting predictions from these models using multiple ensembling techniques. We show that our model outperforms the baselines by a margin of $5.6\%$ of F1 score and $5.95\%$ of Accuracy.", "Experiments ::: Dataset and Experimental Settings", "We perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre- and post-neutralized sentences made by Wikipedia editors under the neutral point of view. It contains $180k$ biased sentences and their neutral counterparts, crawled from $423,823$ Wikipedia revisions between 2004 and 2019. We randomly shuffle these sentences, split the dataset in a $90:10$ Train-Test ratio, and perform the evaluation on the held-out test dataset." ], "highlighted_evidence": [ "In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculations BIBREF0, often influenced by one's emotional state and viewpoints.", "In this work, we investigate the application of BERT-based models for the task of subjective language detection.", "Experiments ::: Dataset and Experimental Settings\nWe perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre- and post-neutralized sentences made by Wikipedia editors under the neutral point of view.
It contains $180k$ biased sentences and their neutral counterparts, crawled from $423,823$ Wikipedia revisions between 2004 and 2019" ], "annotation_id": [ "293dcdfb800de157c1c4be7641cd05512cc26fb2" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ] } 1809.08731 Sentence-Level Fluency Evaluation: References Help, But Can Be Spared! Motivated by recent findings on the probabilistic modeling of acceptability judgments, we propose syntactic log-odds ratio (SLOR), a normalized language model score, as a metric for referenceless fluency evaluation of natural language generation output at the sentence level. We further introduce WPSLOR, a novel WordPiece-based version, which harnesses a more compact language model. Even though word-overlap metrics like ROUGE are computed with the help of hand-written references, our referenceless methods obtain a significantly higher correlation with human fluency scores on a benchmark dataset of compressed sentences. Finally, we present ROUGE-LM, a reference-based metric which is a natural extension of WPSLOR to the case of available references. We show that ROUGE-LM yields a significantly higher correlation with human judgments than all baseline metrics, including WPSLOR on its own.
{ "section_name": [ "Introduction", "On Acceptability", "Method", "SLOR", "WordPieces", "WPSLOR", "Experiment", "Dataset", "LM Hyperparameters and Training", "Baseline Metrics", "Correlation and Evaluation Scores", "Results and Discussion", "Analysis I: Fluency Evaluation per Compression System", "Analysis II: Fluency Evaluation per Domain", "Incorporation of Given References", "Experimental Setup", "Fluency Evaluation", "Compression Evaluation", "Criticism of Common Metrics for NLG", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Producing sentences which are perceived as natural by a human addressee—a property which we will denote as fluency throughout this paper —is a crucial goal of all natural language generation (NLG) systems: it makes interactions more natural, avoids misunderstandings and, overall, leads to higher user satisfaction and user trust BIBREF0 . Thus, fluency evaluation is important, e.g., during system development, or for filtering unacceptable generations at application time. However, fluency evaluation of NLG systems constitutes a hard challenge: systems are often not limited to reusing words from the input, but can generate in an abstractive way. Hence, it is not guaranteed that a correct output will match any of a finite number of given references. This results in difficulties for current reference-based evaluation, especially of fluency, causing word-overlap metrics like ROUGE BIBREF1 to correlate only weakly with human judgments BIBREF2 . As a result, fluency evaluation of NLG is often done manually, which is costly and time-consuming.", "Evaluating sentences on their fluency, on the other hand, is a linguistic ability of humans which has been the subject of a decade-long debate in cognitive science. In particular, the question has been raised whether the grammatical knowledge that underlies this ability is probabilistic or categorical in nature BIBREF3 , BIBREF4 , BIBREF5 . 
Within this context, lau2017grammaticality have recently shown that neural language models (LMs) can be used for modeling human ratings of acceptability. Namely, they found SLOR BIBREF6 —sentence log-probability which is normalized by unigram log-probability and sentence length—to correlate well with acceptability judgments at the sentence level.", "However, to the best of our knowledge, these insights have so far gone disregarded by the natural language processing (NLP) community. In this paper, we investigate the practical implications of lau2017grammaticality's findings for fluency evaluation of NLG, using the task of automatic compression BIBREF7 , BIBREF8 as an example (cf. Table 1 ). Specifically, we test our hypothesis that SLOR should be a suitable metric for evaluation of compression fluency which (i) does not rely on references; (ii) can naturally be applied at the sentence level (in contrast to the system level); and (iii) does not need human fluency annotations of any kind. In particular the first aspect, i.e., SLOR not needing references, makes it a promising candidate for automatic evaluation. Getting rid of human references has practical importance in a variety of settings, e.g., if references are unavailable due to a lack of resources for annotation, or if obtaining references is impracticable. The latter would be the case, for instance, when filtering system outputs at application time.", "We further introduce WPSLOR, a novel, WordPiece BIBREF9 -based version of SLOR, which drastically reduces model size and training time. Our experiments show that both approaches correlate better with human judgments than traditional word-overlap metrics, even though the latter do rely on reference compressions. Finally, investigating the case of available references and how to incorporate them, we combine WPSLOR and ROUGE to ROUGE-LM, a novel reference-based metric, and increase the correlation with human fluency ratings even further." 
], [ "Acceptability judgments, i.e., speakers' judgments of the well-formedness of sentences, have been the basis of much linguistic research BIBREF10 , BIBREF11 : a speaker's intuition about a sentence is used to draw conclusions about a language's rules. Commonly, “acceptability” is used synonymously with “grammaticality”, and speakers are in practice asked for grammaticality judgments or acceptability judgments interchangeably. Strictly speaking, however, a sentence can be unacceptable, even though it is grammatical – a popular example is Chomsky's phrase “Colorless green ideas sleep furiously.” BIBREF3 In turn, acceptable sentences can be ungrammatical, e.g., in an informal context or in poems BIBREF12 .", "Scientists—linguists, cognitive scientists, psychologists, and NLP researchers alike—disagree about how to represent human linguistic abilities. One subject of debate is acceptability judgments: while, for many, acceptability is a binary condition on membership in a set of well-formed sentences BIBREF3 , others assume that it is gradient in nature BIBREF13 , BIBREF2 . Tackling this research question, lau2017grammaticality aimed at modeling human acceptability judgments automatically, with the goal of gaining insight into the nature of human perception of acceptability. In particular, they tried to answer the question: Do humans judge acceptability on a gradient scale? Their experiments showed a strong correlation between human judgments and normalized sentence log-probabilities under a variety of LMs for artificial data they had created by translating and back-translating sentences with neural models. While they tried different types of LMs, best results were obtained for neural models, namely recurrent neural networks (RNNs).", "In this work, we investigate if approaches which have proven successful for modeling acceptability can be applied to the NLP problem of automatic fluency evaluation."
], [ "In this section, we first describe SLOR and the intuition behind this score. Then, we introduce WordPieces, before explaining how we combine the two." ], [ "SLOR assigns to a sentence $S$ a score which consists of its log-probability under a given LM, normalized by unigram log-probability and length: ", "$$\text{SLOR}(S) = \frac{1}{|S|} \left( \ln (p_M(S)) - \ln (p_u(S)) \right)$$ (Eq. 8) ", " where $p_M(S)$ is the probability assigned to the sentence under the LM. The unigram probability $p_u(S)$ of the sentence is calculated as ", "$$p_u(S) = \prod _{t \in S} p(t)$$ (Eq. 9) ", "with $p(t)$ being the unconditional probability of a token $t$, i.e., given no context.", "The intuition behind subtracting unigram log-probabilities is that a token which is rare on its own (in contrast to being rare at a given position in the sentence) should not bring down the sentence's rating. The normalization by sentence length is necessary in order not to prefer shorter sentences over equally fluent longer ones. Consider, for instance, the following pair of sentences: ", "$$\textrm {(i)} ~ ~ \textrm {He is a citizen of France.} \\ \textrm {(ii)} ~ ~ \textrm {He is a citizen of Tuvalu.}$$ (Eq. 11) ", " Given that both sentences are of equal length and assuming that France appears more often in a given LM training set than Tuvalu, the length-normalized log-probability of sentence (i) under the LM would most likely be higher than that of sentence (ii). However, since both sentences are equally fluent, we expect taking each token's unigram probability into account to lead to a more suitable score for our purposes.", "We calculate the probability of a sentence with a long short-term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus. More details on LSTM LMs can be found, e.g., in sundermeyer2012lstm. The unigram probabilities for SLOR are estimated using the same corpus."
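As a concrete illustration, SLOR can be computed in a few lines once sentence and unigram log-probabilities are available. The sketch below uses invented unigram probabilities and invented LM log-probabilities in place of the paper's LSTM LM; only the scoring logic follows Eq. 8 and Eq. 9:

```python
import math

# Toy unigram probabilities; in the paper these are estimated from the
# LM training corpus. All numbers here are invented for illustration.
UNIGRAM_P = {"he": 0.02, "is": 0.03, "a": 0.05, "citizen": 0.0001,
             "of": 0.04, "france": 0.0005, "tuvalu": 0.000001}

def unigram_logprob(tokens):
    # ln p_u(S): sum of unconditional token log-probabilities (Eq. 9).
    return sum(math.log(UNIGRAM_P[t]) for t in tokens)

def slor(lm_logprob, tokens):
    # SLOR(S) = (1/|S|) * (ln p_M(S) - ln p_u(S))  (Eq. 8)
    return (lm_logprob - unigram_logprob(tokens)) / len(tokens)

france = ["he", "is", "a", "citizen", "of", "france"]
tuvalu = ["he", "is", "a", "citizen", "of", "tuvalu"]

# Assume the LM penalizes the rare word "Tuvalu" by about ln(500) nats.
print(slor(-20.0, france), slor(-26.0, tuvalu))
```

With these assumed numbers, the plain length-normalized log-probabilities of the two sentences differ by 1.0, while their SLOR scores differ by only about 0.036: the unigram correction absorbs most of the rarity penalty, as intended for sentences (i) and (ii).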
], [ "Sub-word units like WordPieces BIBREF9 are getting increasingly important in NLP. They constitute a compromise between characters and words: On the one hand, they yield a smaller vocabulary, which reduces model size and training time, and improve handling of rare words, since those are partitioned into more frequent segments. On the other hand, they contain more information than characters.", "WordPiece models are estimated using a data-driven approach which maximizes the LM likelihood of the training corpus, as described in wu2016google and 6289079." ], [ "We propose a novel version of SLOR, incorporating a LM which is trained on a corpus that has been split by a WordPiece model. This leads to a smaller vocabulary, resulting in a LM with fewer parameters, which is faster to train (around 12h compared to roughly 5 days for the word-based version in our experiments). We will refer to the word-based SLOR as WordSLOR and to our newly proposed WordPiece-based version as WPSLOR." ], [ "Now, we present our main experiment, in which we assess the performances of WordSLOR and WPSLOR as fluency evaluation metrics." ], [ "We experiment on the compression dataset by toutanova2016dataset. It contains single sentences and two-sentence paragraphs from the Open American National Corpus (OANC), which belong to 4 genres: newswire, letters, journal, and non-fiction. Gold references are manually created, and the outputs of 4 compression systems (ILP (extractive), NAMAS (abstractive), SEQ2SEQ (extractive), and T3 (abstractive); cf. toutanova2016dataset for details) for the test data are provided. Each example has 3 to 5 independent human ratings for content and fluency. We are interested in the latter, which is rated on an ordinal scale from 1 (disfluent) through 3 (fluent). We experiment on the 2955 system outputs for the test split.", "Average fluency scores per system are shown in Table 2 . As can be seen, ILP produces the best output.
In contrast, NAMAS is the worst system for fluency. In order to be able to judge the reliability of the human annotations, we follow the procedure suggested by TACL732 and used by toutanova2016dataset, and compute the quadratic weighted $\kappa$ BIBREF14 for the human fluency scores of the system-generated compressions as $0.337$." ], [ "We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data.", "The hyperparameters of all LMs are tuned using perplexity on a held-out part of Gigaword, since we expect LM perplexity and the final evaluation performance of WordSLOR and, respectively, WPSLOR to correlate. Our best networks consist of two layers with 512 hidden units each, and are trained for $2,000,000$ steps with a minibatch size of 128. For optimization, we employ ADAM BIBREF16 ." ], [ "Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.", "We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.", "We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as ", "$$\text{NCE}(S) = \tfrac{1}{|S|} \ln (p_M(S))$$ (Eq. 22) ", "with $p_M(S)$ being the probability assigned to the sentence by a LM.
We employ the same LMs as for SLOR, i.e., LMs trained on words (WordNCE) and WordPieces (WPNCE).", "Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy: ", "$$\text{PPL}(S) = \exp (-\text{NCE}(S))$$ (Eq. 24) ", "Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments." ], [ "Following earlier work BIBREF2 , we evaluate our metrics using Pearson correlation with human judgments. It is defined as the covariance divided by the product of the standard deviations: ", "$$\rho _{X,Y} = \frac{\text{cov}(X,Y)}{\sigma _X \sigma _Y}$$ (Eq. 28) ", "Pearson cannot accurately judge a metric's performance for sentences of very similar quality, i.e., in the extreme case of rating outputs of identical quality, the correlation is either not defined or 0, caused by noise of the evaluation model. Thus, we additionally evaluate using mean squared error (MSE), which is defined as the mean of the squared residuals after a linear transformation: ", "$$\text{MSE}_{X,Y} = \underset{f}{\min }\frac{1}{|X|}\sum \limits _{i = 1}^{|X|}{(f(x_i) - y_i)^2}$$ (Eq. 30) ", "with $f$ being a linear function. Note that, since MSE is invariant to linear transformations of $X$ but not of $Y$, it is a non-symmetric quasi-metric. We apply it with $Y$ being the human ratings. An additional advantage as compared to Pearson is that it has an interpretable meaning: the expected error made by a given metric as compared to the human rating." ], [ "As shown in Table 3 , WordSLOR and WPSLOR correlate best with human judgments: WordSLOR (respectively WPSLOR) has a $0.025$ (respectively $0.008$) higher Pearson correlation than the best word-overlap metric ROUGE-L-mult, even though the latter requires multiple reference compressions.
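These two evaluation scores are straightforward to implement. The sketch below computes Pearson following Eq. 28 and, for Eq. 30, obtains the minimizing linear function $f$ in closed form as the least-squares regression line rather than by explicit minimization (function names and the plain-list representation are our own choices):

```python
def mean(xs):
    return sum(xs) / len(xs)

def pearson(x, y):
    # rho_{X,Y} = cov(X, Y) / (sigma_X * sigma_Y)  (Eq. 28)
    mx, my = mean(x), mean(y)
    cov = mean([(a - mx) * (b - my) for a, b in zip(x, y)])
    sx = mean([(a - mx) ** 2 for a in x]) ** 0.5
    sy = mean([(b - my) ** 2 for b in y]) ** 0.5
    return cov / (sx * sy)

def mse_linear(x, y):
    # MSE_{X,Y} = min over linear f of (1/|X|) * sum (f(x_i) - y_i)^2  (Eq. 30);
    # the optimal f(x) = a*x + b is the least-squares regression line.
    mx, my = mean(x), mean(y)
    a = mean([(u - mx) * (v - my) for u, v in zip(x, y)]) \
        / mean([(u - mx) ** 2 for u in x])
    b = my - a * mx
    return mean([(a * u + b - v) ** 2 for u, v in zip(x, y)])
```

Consistent with the quasi-metric remark above, `mse_linear(x, y)` is unchanged under linear transformations of `x` but not of `y`.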
Furthermore, if we consider ROUGE-L-single, a setting with a single given reference, the distance to WordSLOR increases to $0.048$ for Pearson correlation. Note that, since having a single reference is very common, this result is highly relevant for practical applications. Considering MSE, the top two metrics are still WordSLOR and WPSLOR, with a $0.008$ and, respectively, $0.002$ lower error than the third-best metric, ROUGE-L-mult. ", "Comparing WordSLOR and WPSLOR, we find no significant differences: $0.017$ for Pearson and $0.006$ for MSE. However, WPSLOR uses a more compact LM and, hence, has a shorter training time, since the vocabulary is smaller ($16,000$ vs. $128,000$ tokens).", "Next, we find that WordNCE and WPNCE perform roughly on par with word-overlap metrics. This is interesting, since they, in contrast to traditional metrics, do not require reference compressions. However, their correlation with human fluency judgments is strictly lower than that of their respective SLOR counterparts. The difference between WordSLOR and WordNCE is bigger than that between WPSLOR and WPNCE. This might be due to accounting for differences in frequencies being more important for words than for WordPieces. Both WordPPL and WPPPL clearly underperform as compared to all other metrics in our experiments.", "The traditional word-overlap metrics all perform similarly. ROUGE-L-mult and LR2-F-mult are best and worst, respectively.", "Results are shown in Table 7 . First, we can see that using SVR (line 1) to combine ROUGE-L-mult and WPSLOR outperforms both individual scores (lines 3-4) by a large margin. This serves as a proof of concept: the information contained in the two approaches is indeed complementary.", "Next, we consider the setting where only references and no annotated examples are available.
In contrast to SVR (line 1), ROUGE-LM (line 2) has only the same requirements as conventional word-overlap metrics (besides a large corpus for training the LM, which is easy to obtain for most languages). Thus, it can be used in the same settings as other word-overlap metrics. Since ROUGE-LM—an uninformed combination—performs significantly better than both ROUGE-L-mult and WPSLOR on their own, it should be the metric of choice for evaluating fluency with given references." ], [ "The results per compression system (cf. Table 4 ) look different from the correlations in Table 3 : Pearson and MSE are both lower. This is due to the outputs of each given system being of comparable quality. Therefore, the datapoints are similar and, thus, easier to fit for the linear function used for MSE. Pearson, in contrast, is lower due to its invariance to linear transformations of both variables. Note that this effect is smallest for ILP, which has uniformly distributed targets ($\text{Var}(Y) = 0.35$ vs. $\text{Var}(Y) = 0.17$ for SEQ2SEQ).", "Comparing the metrics, the two SLOR approaches perform best for SEQ2SEQ and T3. In particular, they outperform the best word-overlap metric baseline by $0.244$ and $0.097$ Pearson correlation as well as $0.012$ and $0.012$ MSE, respectively. Since T3 is an abstractive system, we can conclude that WordSLOR and WPSLOR are applicable even for systems that are not limited to making use of a fixed repertoire of words.", "For ILP and NAMAS, word-overlap metrics obtain the best results. The differences in performance, however, with a maximum of $0.072$ Pearson for ILP, are much smaller than for SEQ2SEQ. Thus, while the differences are significant, word-overlap metrics do not outperform our SLOR approaches by a wide margin. Recall, additionally, that word-overlap metrics rely on references being available, while our proposed approaches do not require this." ], [ "Looking next at the correlations for all models but different domains (cf.
Table 5 ), we first observe that the results across domains are similar, i.e., we do not observe the same effect as in Subsection \"Analysis I: Fluency Evaluation per Compression System\". This is due to the distributions of scores being uniform ($\text{Var}(Y) \in [0.28, 0.36]$).", "Next, we focus on an important question: How much does the performance of our SLOR-based metrics depend on the domain, given that the respective LMs are trained on Gigaword, which consists of news data?", "Comparing the evaluation performance for individual metrics, we observe that, except for letters, WordSLOR and WPSLOR perform best across all domains: they outperform the best word-overlap metric by at least $0.019$ and at most $0.051$ Pearson correlation, and at least $0.004$ and at most $0.014$ MSE. The biggest difference in correlation is achieved for the journal domain. Thus, clearly even LMs which have been trained on out-of-domain data obtain competitive performance for fluency evaluation. However, a domain-specific LM might additionally improve the metrics' correlation with human judgments. We leave a more detailed analysis of the importance of the training data's domain for future work.
In this section, we experiment with two reference-based metrics – one trainable one, and one that can be used without fluency annotations, i.e., in the same settings as pure word-overlap metrics." ], [ "First, we assume a setting in which we have the following available: (i) system outputs whose fluency is to be evaluated, (ii) reference generations for evaluating system outputs, (iii) a small set of system outputs with references, which has been annotated for fluency by human raters, and (iv) a large unlabeled corpus for training a LM. Note that available fluency annotations are often uncommon in real-world scenarios; the reason we use them is that they allow for a proof of concept. In this setting, we train scikit's BIBREF18 support vector regression model (SVR) with the default parameters on predicting fluency, given WPSLOR and ROUGE-L-mult. We use 500 of our total 2955 examples for each of training and development, and the remaining 1955 for testing.", "Second, we simulate a setting in which we have only access to (i) system outputs which should be evaluated on fluency, (ii) reference compressions, and (iii) large amounts of unlabeled text. In particular, we assume to not have fluency ratings for system outputs, which makes training a regression model impossible. Note that this is the standard setting in which word-overlap metrics are applied. Under these conditions, we propose to normalize both given scores by mean and variance, and to simply add them up. We call this new reference-based metric ROUGE-LM. In order to make this second experiment comparable to the SVR-based one, we use the same 1955 test examples." ], [ "Fluency evaluation is related to grammatical error detection BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 and grammatical error correction BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . 
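The uninformed combination in the second setting can be sketched directly: z-normalize each score list and add the results. The sketch below is a minimal rendering of that recipe; the score values in the test are invented, whereas real inputs would be the ROUGE-L-mult and WPSLOR scores for the 1955 test examples:

```python
def znorm(scores):
    # Normalize a score list to zero mean and unit variance.
    m = sum(scores) / len(scores)
    sd = (sum((s - m) ** 2 for s in scores) / len(scores)) ** 0.5
    return [(s - m) / sd for s in scores]

def rouge_lm(rouge_scores, wpslor_scores):
    # ROUGE-LM: sum of the mean/variance-normalized ROUGE-L-mult and
    # WPSLOR scores, yielding one combined value per system output.
    return [r + s for r, s in zip(znorm(rouge_scores), znorm(wpslor_scores))]
```

Because both inputs are first brought to zero mean and unit variance, neither metric's scale dominates the sum, which is what allows the combination to work without any fluency-annotated training data.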
However, it differs from those in several aspects; most importantly, it is concerned with the degree to which errors matter to humans.", "Work on automatic fluency evaluation in NLP has been rare. heilman2014predicting predicted the fluency (which they called grammaticality) of sentences written by English language learners. In contrast to ours, their approach is supervised. stent2005evaluating and cahill2009correlating found only low correlation between automatic metrics and fluency ratings for system-generated English paraphrases and the output of a German surface realiser, respectively. Explicit fluency evaluation of NLG, including compression and the related task of summarization, has mostly been performed manually. vadlapudi-katragadda:2010:SRW used LMs for the evaluation of summarization fluency, but their models were based on part-of-speech tags, which we do not require, and they were non-neural. Further, they evaluated longer texts, not single sentences like we do. toutanova2016dataset compared 80 word-overlap metrics for evaluating the content and fluency of compressions, finding only low correlation with the latter. However, they did not propose an alternative evaluation. We aim at closing this gap." ], [ "Automatic compression evaluation has mostly had a strong focus on content. Hence, word-overlap metrics like ROUGE BIBREF1 have been widely used for compression evaluation. However, they have certain shortcomings, e.g., they correlate best for extractive compression, while we, in contrast, are interested in an approach which generalizes to abstractive systems. Alternatives include success rate BIBREF28 , simple accuracy BIBREF29 , which is based on the edit distance between the generation and the reference, or word accuracy BIBREF30 , the equivalent for multiple references." 
], [ "In the sense that we promote an explicit evaluation of fluency, our work is in line with previous criticism of evaluating NLG tasks with a single score produced by word-overlap metrics.", "The need for better evaluation for machine translation (MT) was expressed, e.g., by callison2006re, who doubted the meaningfulness of BLEU, and claimed that a higher BLEU score was neither a necessary precondition nor a proof of improved translation quality. Similarly, song2013bleu discussed BLEU being unreliable at the sentence or sub-sentence level (in contrast to the system level), or with only a single reference. This was supported by isabelle-cherry-foster:2017:EMNLP2017, who proposed a so-called challenge set approach as an alternative. graham-EtAl:2016:COLING performed a large-scale evaluation of human-targeted metrics for machine translation, which can be seen as a compromise between human evaluation and fully automatic metrics. They also found fully automatic metrics to correlate only weakly or moderately with human judgments. bojar2016ten further confirmed that automatic MT evaluation methods do not perform well with a single reference. The need for better metrics for MT has been addressed since 2008 in the WMT metrics shared task BIBREF31 , BIBREF32 .", "For unsupervised dialogue generation, liu-EtAl:2016:EMNLP20163 obtained close to no correlation with human judgments for BLEU, ROUGE and METEOR. They attributed this in large part to the unrestrictedness of dialogue answers, which makes it hard to match given references. They emphasized that the community should move away from these metrics for dialogue generation tasks, and develop metrics that correlate more strongly with human judgments. elliott-keller:2014:P14-2 reported the same for BLEU and image caption generation. duvsek2017referenceless suggested an RNN to evaluate NLG at the utterance level, given only the input meaning representation.
], [ "We empirically confirmed the effectiveness of SLOR, a LM score which accounts for the effects of sentence length and individual unigram probabilities, as a metric for fluency evaluation of the NLG task of automatic compression at the sentence level. We further introduced WPSLOR, an adaptation of SLOR to WordPieces, which reduced both model size and training time at a similar evaluation performance. Our experiments showed that our proposed referenceless metrics correlate significantly better with fluency ratings for the outputs of compression systems than traditional word-overlap metrics on a benchmark dataset. Additionally, they can be applied even in settings where no references are available, or would be costly to obtain. Finally, for given references, we proposed the reference-based metric ROUGE-LM, which consists of a combination of WPSLOR and ROUGE. Thus, we were able to obtain an even more accurate fluency evaluation." ], [ "We would like to thank Sebastian Ebert and Samuel Bowman for their detailed and helpful feedback." ] ] } { "question": [ "Is ROUGE their only baseline?", "what language models do they use?", "what questions do they ask human judges?" ], "question_id": [ "7aa8375cdf4690fc3b9b1799b0f5a9ec1c1736ed", "3ac30bd7476d759ea5d9a5abf696d4dfc480175b", "0e57a0983b4731eba9470ba964d131045c8c7ea7" ], "nlp_background": [ "two", "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "yes", "yes", "yes" ], "search_query": [ "social", "social", "social" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. 
ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.", "We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.", "We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as", "Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:", "Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments." ], "highlighted_evidence": [ "Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks.", "We compare to the best n-gram-overlap metrics from toutanova2016dataset;", "We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length.", "Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:", "Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . " ]
Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.", "We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.", "We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as", "Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:", "Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments." ], "highlighted_evidence": [ "Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks.", "We compare to the best n-gram-overlap metrics from toutanova2016dataset;", "We further compare to the negative LM cross-entropy", "Our next baseline is perplexity, ", "Due to its popularity, we also performed initial experiments with BLEU BIBREF17 " ] } ], "annotation_id": [ "24ebf6cd50b3f873f013cd206aa999a4aa841317", "d04c757c5a09e8a9f537d15bdd93ac4043c7a3e9" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "LSTM LMs" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We calculate the probability of a sentence with a long-short term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus. More details on LSTM LMs can be found, e.g., in sundermeyer2012lstm.
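The two length-normalized LM baselines quoted above, negative cross-entropy and perplexity, are related by a single exponentiation. A minimal sketch, assuming per-token natural-log probabilities from the LM (the function names are ours):

```python
import math

def neg_cross_entropy(token_logprobs):
    """Negative cross-entropy: sentence log-probability
    normalized only by sentence length."""
    return sum(token_logprobs) / len(token_logprobs)

def perplexity(token_logprobs):
    """Perplexity is the exponentiated cross-entropy."""
    return math.exp(-neg_cross_entropy(token_logprobs))
```

Unlike SLOR, neither baseline corrects for unigram frequency, so sentences containing rare words are penalized even when they are perfectly fluent.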
The unigram probabilities for SLOR are estimated using the same corpus.", "We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data." ], "highlighted_evidence": [ "We calculate the probability of a sentence with a long-short term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus.", "We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data." ] } ], "annotation_id": [ "5ecd71796a1b58d848a20b0fe4be06ee50ea40fb" ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [ "" ], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "3fd01f74c49811127a1014b99a0681072e1ec34d" ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] } ] }
1707.00995
An empirical study on the effectiveness of images in Multimodal Neural Machine Translation
In state-of-the-art Neural Machine Translation (NMT), an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions that they describe. In this paper, we compare several attention mechanisms on the multimodal translation task (English, image to German) and evaluate the ability of the model to make use of images to improve translation. We surpass state-of-the-art scores on the Multi30k data set; we nevertheless identify and report different misbehaviors of the model while translating.