{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:28:26.920445Z"
},
"title": "Listener's Social Identity Matters in Personalised Response Generation",
"authors": [
{
"first": "Guanyi",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Utrecht University",
"location": {}
},
"email": "g.chen@uu.nl"
},
{
"first": "Yinhe",
"middle": [],
"last": "Zheng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {}
},
"email": "yh.zheng@samsung.com"
},
{
"first": "Yupei",
"middle": [],
"last": "Du",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Utrecht University",
"location": {}
},
"email": "y.du@uu.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Personalised response generation enables generating human-like responses by assigning the generator a social identity. However, pragmatics theory suggests that human beings adjust the way they speak based not only on who they are but also on whom they are talking to. In other words, when modelling personalised dialogues, it might be favourable to also take the listener's social identity into consideration. To validate this idea, we use gender as a typical example of a social variable to investigate how the listener's identity influences the language used in Chinese dialogues on social media. We also build personalised generators. The experimental results demonstrate that the listener's identity indeed matters in the language use of responses and that the response generator can capture such differences in language use. More interestingly, by additionally modelling the listener's identity, the personalised response generator performs better at expressing its own identity.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Personalised response generation enables generating human-like responses by assigning the generator a social identity. However, pragmatics theory suggests that human beings adjust the way they speak based not only on who they are but also on whom they are talking to. In other words, when modelling personalised dialogues, it might be favourable to also take the listener's social identity into consideration. To validate this idea, we use gender as a typical example of a social variable to investigate how the listener's identity influences the language used in Chinese dialogues on social media. We also build personalised generators. The experimental results demonstrate that the listener's identity indeed matters in the language use of responses and that the response generator can capture such differences in language use. More interestingly, by additionally modelling the listener's identity, the personalised response generator performs better at expressing its own identity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Persona plays an important role in our daily communication since it affects the way we render our dialogues. Social variables, such as gender, age, place of birth, or even wealth and social status, account for a large proportion of each individual's persona. Numerous previous studies have suggested that these variables strongly affect each speaker's word preference in dialogues. A growing body of work has been carried out to implicitly or explicitly model these variables in dialogues (Li et al., 2016b; Qian et al., 2017; Kottur et al., 2017; Zheng et al., 2019).",
"cite_spans": [
{
"start": 489,
"end": 507,
"text": "(Li et al., 2016b;",
"ref_id": "BIBREF18"
},
{
"start": 508,
"end": 526,
"text": "Qian et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 527,
"end": 547,
"text": "Kottur et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 548,
"end": 566,
"text": "Zheng et al., 2019",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the reported success, most previous studies of personalised dialogue modelling consider only the persona of speakers (for terminological consistency, we use \"speaker\" to refer to the person who produces the response, i.e., whom a personalised dialogue system aims to model, and \"listener\" to refer to the one who utters the post). Nevertheless, pragmatics theory suggests that speaking style is adjusted not only by who the speaker is, but also by whom the speaker is talking to (Wish et al., 1976; Hovy, 1987) . In the computational linguistics community, Dinan et al. (2020) investigate this issue by measuring and mitigating gender bias in a dialogue dataset using a gender classifier. From the perspective of personalised dialogue generation, prior work has tried to attach the listener persona to the encoder of the generator, but, interestingly, obtained very different results: the performance went down in one case while it went up in another.",
"cite_spans": [
{
"start": 346,
"end": 365,
"text": "(Wish et al., 1976;",
"ref_id": "BIBREF35"
},
{
"start": 366,
"end": 377,
"text": "Hovy, 1987)",
"ref_id": "BIBREF9"
},
{
"start": 424,
"end": 443,
"text": "Dinan et al. (2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nonetheless, no systematic studies have been conducted to investigate what role the listener's identity plays in personalised response generation. The research questions that we wish to answer in this paper are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. How does the listener's social identity impact the responder's language use?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Can a response generator capture this impact, and,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "if yes, in which way?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To this end, we conduct analyses and build a response generator on a Chinese personalised dialogue dataset: PERSONALDIALOG, a corpus extracted from Weibo 2 . There are two reasons to use this dataset: one is that PERSONALDIALOG originates from real conversations on the social media platform Weibo, in which speakers' social variables play an important role; the other is that it provides a massive amount of dialogue data (over 20M sessions) between a large population of speakers (over 8M speakers). It is of sufficient size to capture a variety of linguistic phenomena that are associated with social variables. Each speaker/listener in PERSONALDIALOG comes with four social variables: gender, age, location, and interests. For simplicity, and to conduct controlled analyses and experiments, we focus only on gender in this paper.",
"cite_spans": [
{
"start": 429,
"end": 448,
"text": "(over 20M sessions)",
"ref_id": null
},
{
"start": 488,
"end": 506,
"text": "(over 8M speakers)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As for the first research question, we postulate that a speaker behaves stylistically differently when s/he speaks to people of different genders. This yields four possible speaking styles: ff, mf, fm, and mm 3 . We therefore build a classifier to separate these styles defined on \"gender-pairs\". Previous analyses of blogging data (Schler et al., 2006; Goswami et al., 2009; Nguyen et al., 2011; Bamman et al., 2014) have identified that one of the key features for distinguishing content produced by females from that produced by males is sentence length, i.e., females tend to utter longer sentences. As shown in Figure 1 , the same phenomenon is found in PERSONALDIALOG: females' responses are generally longer than males'. Further statistics on the response lengths falling into the above four styles suggest that gender-pairs are also separable, perhaps excepting mf and fm at first glance. To validate this and understand why, we build a gender-pair classifier and conduct a so-called pivot word analysis, finding out which words contribute most to helping the classifier make its decisions. Experimental results show that these styles are separable, but mf and fm are often confused with each other.",
"cite_spans": [
{
"start": 335,
"end": 356,
"text": "(Schler et al., 2006;",
"ref_id": "BIBREF31"
},
{
"start": 357,
"end": 378,
"text": "Goswami et al., 2009;",
"ref_id": "BIBREF7"
},
{
"start": 379,
"end": 399,
"text": "Nguyen et al., 2011;",
"ref_id": "BIBREF25"
},
{
"start": 400,
"end": 420,
"text": "Bamman et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 617,
"end": 625,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As for the second research question, we build a personalised response generator conditioned on these styles. The outcomes suggest that the generator can capture the differences between those styles and, in addition, that modelling the listener's identity helps the generator to express its own identity. Moreover, based on the previous analyses, we also tried to merge the styles mf and fm into a single integrated style mf/fm. However, the final results of the response generator suggest that it is hard to model utterances with this integrated style.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To approach the first research question, we build a gender-pair classifier to simultaneously recognise the speaker's and listener's social identities based on the dialogue utterances. Concretely, as mentioned in section 1, we cast the present task as a style classification task and design four labels for each input dialogue utterance: mm (male talking to male), mf (female talking to male), fm (male talking to female), and ff (female talking to female). However, in light of the Linguistic Style Matching theory (Niederhoffer and Pennebaker, 2002) , speakers imitate the linguistic style of their conversation partner to pursue higher engagement. In other words, when two speakers of different genders communicate with each other, their speaking styles may assimilate as the conversation proceeds. On top of this observation, one may argue that dissociating fm and mf is hard and that it would therefore be favourable to merge fm and mf into a single category, namely mf/fm.",
"cite_spans": [
{
"start": 520,
"end": 555,
"text": "(Niederhoffer and Pennebaker, 2002)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gender-Pair Classification",
"sec_num": "2"
},
{
"text": "Building on what has been discussed, and to further draw insight from conventional gender classification, we consider the following three classification tasks based on three speaking style categorisation schemes: 1) two-way classification: classifying only the speaker's gender, with two labels: male and female; 2) three-way classification: classifying the conversational texts based on a merged labelling scheme with three labels: mm, fm/mf, and ff; and 3) four-way classification: the gender-pair classification, which classifies the conversational texts into mm, fm, mf, and ff.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Build Gender-Pair Classifiers",
"sec_num": "2.1"
},
{
"text": "We test a number of text classification algorithms, including fastText 4 (Joulin et al., 2017) , TextCNN (Kim, 2014) and LSTM (Hochreiter and Schmidhuber, 1997 ) (in which the hidden states of all the tokens are max-pooled before being fed into the final Softmax layer). To enable interpretable analysis, we also train a Bag-of-Words (BOW) classifier: a logistic regression with only unigram features.",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "(Joulin et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 105,
"end": 116,
"text": "(Kim, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 126,
"end": 159,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Models",
"sec_num": "2.1.1"
},
{
"text": "Building on the fact that classifying social variables based on social media data is hard (Nguyen et al., 2013, 2014) , and that the exhibition of speakers' social identities is sparse in social media text, we adopt the classification strategy used by Zheng et al. (2019) . Specifically, each classifier input is a concatenation of N randomly sampled responses with the same style. In this study, we use N = 20. We train and test the classifiers on PERSONALDIALOG, where the dataset has been divided into non-overlapping training and testing sets. The training data are down-sampled to balance the corpus. 10% of the training set is held out for tuning parameters, and the final models are trained on the whole training set. The classifiers are evaluated using F1 scores. Table 1 depicts the performance of these classifiers. FastText performs remarkably well (the official implementation of fastText from Facebook is used: https://github.com/facebookresearch/fastText). It outperforms both TextCNN and LSTM, which are models with much higher complexity and capacity. Surprisingly, the simplest BOW classifier also achieves comparably good performance, which suggests that word usage is the most important feature for distinguishing speakers' social identity (at least for gender). A further comparison of the fastText and BOW classifiers shows that unigram features are sufficient for gender classification in the coarse 2-way setting, while higher-order n-gram features (used by fastText) are useful in the more fine-grained 3-way and 4-way settings.",
"cite_spans": [
{
"start": 98,
"end": 118,
"text": "(Nguyen et al., 2013",
"ref_id": "BIBREF23"
},
{
"start": 119,
"end": 141,
"text": "(Nguyen et al., , 2014",
"ref_id": "BIBREF26"
},
{
"start": 272,
"end": 291,
"text": "Zheng et al. (2019)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 797,
"end": 804,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "2.1.2"
},
{
"text": "The F1 score of the 4-way gender-pair classification using fastText reaches 0.68. This means that it is feasible to identify the style of the listener by considering only the utterances issued by the speaker. We show the confusion matrix of this result in Figure 2 . The utterances from ff and mm are rarely confused with each other. This indicates that the language use of males and females differs clearly when they speak to people of the same gender. When they talk to people of a different gender, in line with the results of gender classification, they tend to express stylistic characteristics related to their own gender, since confusions appear between fm and mm as well as between mf and ff. Nonetheless, we also observe equally severe confusion between fm and mf, which confirms that the linguistic style matching hypothesis plays a certain role when people express their social identities.",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 265,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "2.1.3"
},
{
"text": "In addition, we also observe a certain level of confusion between fm and ff as well as between mf and mm. That is, the classifier sometimes confuses, for example, an utterance from a male with an utterance from a female when they both speak to male listeners. This, yet again, can be seen as evidence for the existence of linguistic style matching. Although the utterances from fm and mf show a tendency of assimilation, it appears that the speakers still maintain the characteristics of their own gender and, in this sense, there are still reasons to dissociate the style of fm from that of mf.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "2.1.3"
},
{
"text": "To understand how people change their language use with respect to the social identities of themselves and of whom they speak to, or, in other words, how the gender-pair classifiers make their decisions, we apply Pivot Word Analysis. Pivot words are words that have a substantial influence on the classifier's decision making and have been widely used for interpreting language use in many language generation tasks such as Style Transfer (Fu et al., 2019) and Table-to-Text Generation (Ma et al., 2019) .",
"cite_spans": [
{
"start": 527,
"end": 544,
"text": "(Fu et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 575,
"end": 591,
"text": "(Ma et al., 2019",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 549,
"end": 555,
"text": "Table-",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pivot Word Discovery",
"sec_num": "2.2"
},
{
"text": "Since the expression of social identity is sparse in social media data, the appearance of pivot words in utterances is also sparse. Therefore, the pivot word discovery algorithms introduced in (Fu et al., 2019) and (Ma et al., 2019) are not applicable to the present task. Instead, we use a simple yet efficient pivot word discovery algorithm, coined Classifier-based Pivot Word Discovery, for extracting pivot words using the trained BOW classifier.",
"cite_spans": [
{
"start": 200,
"end": 217,
"text": "(Fu et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 222,
"end": 239,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot Word Extraction Algorithm.",
"sec_num": "2.2.1"
},
{
"text": "The algorithm finds which word types in the training data play a major role in the BOW classifier's decision-making. It is sketched in Algorithm 1. As can be seen from lines 2-5, the algorithm only considers samples that have been correctly classified. For each word type t in a sample x, it compares the classification results and confidences when including and excluding t in x (lines 2-8). Specifically, if the classifier's predicted result changes or the change in prediction confidence exceeds a certain threshold \u03b2, we extract t as a pivot word candidate (line 10). If the same word type has been extracted as a candidate more than \u03b1 times under a single category, the algorithm returns it as a pivot word (line 15). In this work, we set \u03b1 and \u03b2 to 10 and 0.5, respectively. 2.2.2 Extracted Pivot Words. Table 2 lists typical examples of the extracted pivot words in each category for the gender classifier and the gender-pair classifier. For gender classification, we observe that the general topics discussed by males and females differ clearly on Weibo. Specifically, males focus on digital products, politics, and games, while females like talking about star-chasing, TV dramas, makeup, and shopping. It is worth noting that one reason Weibo users concentrate on these topics is that most of them are young people, according to the statistics in Zheng et al. (2019) . These topics might change if data extracted in more recent years were used, since the PERSONALDIALOG dataset was crawled in 2018.",
"cite_spans": [
{
"start": 1403,
"end": 1422,
"text": "Zheng et al. (2019)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 833,
"end": 840,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Pivot Word Extraction Algorithm.",
"sec_num": "2.2.1"
},
{
"text": "More interestingly, we also find differences in the use of punctuation and pronouns. Males use punctuation in a more formal way on social media (commas and periods are frequently used), while females tend to concatenate sequences of punctuation to express certain emotions or speech acts (e.g., \"\u223c\u223c\", \"!!!!\"). The first-person pronoun was extracted as a pivot word for the female category, which might suggest that males are more likely to drop pronouns on social media. 5 A definitive account of how the use of zero pronouns is affected by the speaker's social identity needs further research, which is not the focus of this paper.",
"cite_spans": [
{
"start": 486,
"end": 487,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot Word Extraction Algorithm.",
"sec_num": "2.2.1"
},
{
"text": "Comparing the extracted pivot words for the gender-pair classifier and the gender classifier, in line with the classification results detailed in section 2.1, we observe more overlap between female and ff, as well as between male and mm, than between female and mf or between male and fm. When comparing the words from different gender-pair categories, we find that people talk about different topics depending on whether they talk to people of the same or of a different gender. For example, when a female talks to another female, they discuss \"idols\" they like, shopping, and dressing, which are rarely mentioned when she talks to a male. These observations explain why utterances with style mf (fm) are separable from those with style ff (mm), and suggest that the identities of listeners really do matter to the way speakers speak.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot Word Extraction Algorithm.",
"sec_num": "2.2.1"
},
{
"text": "As for the linguistic style matching hypothesis, some evidence has been found. For example, fm and mf share some topics, including travelling, studying, working, and gaming. Moreover, first-person pronouns are more likely to be used when males speak to females, but no similar matching appears in the use of punctuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot Word Extraction Algorithm.",
"sec_num": "2.2.1"
},
{
"text": "In order to quantify how the gender-pair influences language use, we conduct a Pivot Free Classification experiment, in which the BOW classifier is evaluated on test data from which the pivot words of a certain category have been removed. Since we care about how many samples of a category are misclassified into other categories when pivot words are removed, we report the recall scores in Table 3 . We test the performance of the gender-pair classifier \"attacked\" by pivot words extracted by the gender-pair and the gender classifier. We call the category on which we report the performance the target category and the category from which we extract the pivot words the source category.",
"cite_spans": [],
"ref_spans": [
{
"start": 388,
"end": 395,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Pivot Free Classification",
"sec_num": "2.3"
},
{
"text": "On the basis of the results in Table 3 , we make the following observations. First, the performance reduces to almost zero if the source and target are the same category, which implies that the extracted pivot words are those that actually drive the decision making of the classifier. Second, ff and mm are clearly separable, as no impact is found when they \"attack\" each other. Third, in line with the previous findings and the linguistic style matching theory, mf and fm are highly confused with each other, which can be confirmed along two dimensions: 1) as source categories, they strongly reduce each other's performance; 2) pivot words from female have remarkable effects not only on mf and ff but also on fm. Fourth, mf and fm are not exactly the same, since, for instance, the impact of mf on ff is clearly higher than that of fm on ff. Last, the style of a conversation between speakers of different genders is more similar to the style of how females speak.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 38,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Pivot Free Classification",
"sec_num": "2.3"
},
{
"text": "To explore the second research question, namely whether a personalised response generator can capture the differences in language use when imitating a speaker talking to listeners with different social identities, we train multiple response generators conditioned on the three style categorisation schemes mentioned in section 2. We start by introducing the basic architecture of our generator and the experimental settings. We then describe the evaluation metrics we use, with which we evaluate and analyse the generators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Personalised Response Generation",
"sec_num": "3"
},
{
"text": "Since inventing a new state-of-the-art personalised response generator falls outside the scope of this paper, we build our model following a simplified paradigm of Zheng et al. (2020a,b) . The architecture of the model we use is sketched in Figure 3 .",
"cite_spans": [
{
"start": 163,
"end": 185,
"text": "Zheng et al. (2020a,b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 241,
"end": 249,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "The Personalised Response Generator",
"sec_num": "3.1"
},
{
"text": "Concretely, we are given a dataset containing N dialogue pairs, each with its style:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Personalised Response Generator",
"sec_num": "3.1"
},
{
"text": "D = {(x 1 , y 1 , s 1 ), ..., (x N , y N , s N )},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Personalised Response Generator",
"sec_num": "3.1"
},
{
"text": "where x i is the post, y i is the response, and s i is the style label of that response (i.e., in our case, it could be female or fm). As depicted in Figure 3 , each post x is first mapped into the word embedding space using e w (\u2022) and then encoded via a Transformer (Vaswani et al., 2017) based encoder into a representation E x .",
"cite_spans": [
{
"start": 269,
"end": 291,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "The Personalised Response Generator",
"sec_num": "3.1"
},
{
"text": "In the decoding phase, we inject the style information by utilising the attention routing mechanism. Specifically, unlike the standard Transformer decoder, both multi-head attention (MHA) and masked multi-head attention (MMHA) are deployed. In each decoder block, given E x and the embedded previously decoded response E ypre = e w (y pre ), they are encoded to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Style Information",
"sec_num": "3.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R_{pre} = \\mathrm{MMHA}(E_{y_{pre}}, E_{y_{pre}}, E_{y_{pre}}) \\quad (1) \\qquad R_{post} = \\mathrm{MHA}(E_{y_{pre}}, E_x, E_x)",
"eq_num": "(2)"
}
],
"section": "Encoding Style Information",
"sec_num": "3.1.1"
},
{
"text": "The style s is mapped to E s using the style embedding e s (s). This set of representations is merged in the following way into R before being fed to layer normalisation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Style Information",
"sec_num": "3.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R = (R_{pre} + R_{post})/2 + E_{y_{pre}} + E_s",
"eq_num": "(3)"
}
],
"section": "Encoding Style Information",
"sec_num": "3.1.1"
},
{
"text": "in which R pre and R post are averaged. Despite its simplicity, one major reason why we do not use the original model in this study is that it does not encode the personae (i.e., gender in our case) of the speaker and the listener symmetrically. To be more specific, it encodes the persona of the listener as a number of style embeddings, which are added to the input together with the positional embeddings, while the speaker's persona is encoded as a sequence of words and concatenated with the embedded post x. This kind of dissociation makes our experiments less controlled. Instead, in this study, we merge the labels for the speaker and the listener (i.e., into a label such as mf) and map it into a single style embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Style Information",
"sec_num": "3.1.1"
},
{
"text": "The encoder and decoder in our model share their parameters. To further increase the quality of the generated responses, akin to much previous research in dialogue modelling (Wolf et al., 2019; Wang et al., 2020a) , we initialise the parameters of our model using a pre-trained Chinese GPT model (Radford et al., 2019; Wang et al., 2020b) .",
"cite_spans": [
{
"start": 178,
"end": 197,
"text": "(Wolf et al., 2019;",
"ref_id": "BIBREF36"
},
{
"start": 198,
"end": 217,
"text": "Wang et al., 2020a)",
"ref_id": "BIBREF33"
},
{
"start": 300,
"end": 322,
"text": "(Radford et al., 2019;",
"ref_id": "BIBREF30"
},
{
"start": 323,
"end": 342,
"text": "Wang et al., 2020b)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Sharing and Pre-training.",
"sec_num": "3.1.2"
},
{
"text": "We train and evaluate the model on the PERSONALDIALOG dataset. For simplicity, in line with Zheng et al. (2019), we only train and test our model on the first turn of each dialogue session in PERSONALDIALOG. To conduct a controlled and fair analysis, we train three models corresponding to the three style categorisation schemes introduced in section 2 (see Table 4 ). In the following sections, we refer to them by their ID, i.e., model 1, 2, or 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 367,
"end": 374,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.2.1"
},
{
"text": "Recall that our goal is not to defeat state-of-the-art personalised response generators in the sense of generating better responses. Nonetheless, we still report some relevant results using commonly used automatic metrics, including: BLEU (Papineni et al., 2002) , a metric comparing overlaps of n-grams (n = 1, 2) between the reference responses and the generated responses, for evaluating adequacy and fluency; and DIST (Li et al., 2016a) , which measures the proportion of distinct n-grams (n = 1), for evaluating the diversity of the model outputs. To help obtain insights into the system outputs for the second research question, we design a number of new metrics based on the classifiers built and the pivot words extracted in section 2. Specifically, for evaluating a model with n style categories, we propose the following metrics:",
"cite_spans": [
{
"start": 241,
"end": 264,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF28"
},
{
"start": 427,
"end": 445,
"text": "(Li et al., 2016a)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2.2"
},
{
"text": "1. ACC. evaluates whether the generated responses incorporate the target style, using the trained n-way classifier. A similar approach has been employed to evaluate the outputs of conditional language generators with off-line classifiers (Zhou et al., 2018; Zheng et al., 2019; Li et al., 2020) . During evaluation, the system outputs are concatenated in the same way as the training data of those classifiers. Considering speed and performance, we use the fastText classifier in the evaluation;",
"cite_spans": [
{
"start": 229,
"end": 248,
"text": "(Zhou et al., 2018;",
"ref_id": "BIBREF41"
},
{
"start": 249,
"end": 268,
"text": "Zheng et al., 2019;",
"ref_id": "BIBREF38"
},
{
"start": 269,
"end": 285,
"text": "Li et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2.2"
},
{
"text": "2. ACC-2. evaluates whether the generated response reflect gender information using the trained gender classifier. It is worth noting that this metric is not applicable to model 2 since we have merged the mf and fm, we expect that they are no longer separable;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2.2"
},
{
"text": "3. Pivot Word Precision (PWP). evaluates to what proportion the generated tokens are pivot words. Suppose the system outputs with style s is\u0176 s with the vocabulary V and the pivot words extracted by n-way classifier is \u2126 s , the PWP is computed by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "PWP s = w\u2208\u2126s #(w,\u0176 s ) w\u2208V #(w,\u0176 s )",
"eq_num": "(4)"
}
],
"section": "Evaluation Metrics",
"sec_num": "3.2.2"
},
{
"text": "where #(w,\u0176 s ) is the frequency of w in\u0176 s . PWP is calculated for each style and is then micro-averaged;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2.2"
},
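{
"text": "As an illustrative sketch (not the authors' code), PWP for one style can be computed as follows, where `outputs` is assumed to be a list of tokenised system responses for style s and `pivot_words` the extracted pivot word set \u2126 s :

```python
from collections import Counter

def pwp(outputs, pivot_words):
    # outputs: list of tokenised system responses for one style
    # pivot_words: set of pivot words extracted for that style
    counts = Counter(tok for sent in outputs for tok in sent)
    total = sum(counts.values())
    pivot = sum(c for w, c in counts.items() if w in pivot_words)
    return pivot / total if total > 0 else 0.0

# here two of the four generated tokens are pivot words
print(pwp([['a', 'b', 'a', 'c']], {'a'}))
```

The per-style scores are then micro-averaged over all n styles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2.2"
},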
{
"text": "4. Pivot Word Recall (PWR). evaluates how many word types in pivot words has been generated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "PWR s = w\u2208\u2126s I(w,\u0176 s ) |\u2126 s |",
"eq_num": "(5)"
}
],
"section": "Evaluation Metrics",
"sec_num": "3.2.2"
},
{
"text": "where I(w,\u0176 s ) equals to one if w appears in Y s , otherwise it equals to 0. Table 4 charts the results of all the metrics above. It is not surprising that no significant difference is found in BLEU and DIST score between all three models since all of them have the same model architecture, the same parameter setting and, thus, the same capacity. Due to the fact that different off-line classifiers have very different performance in their own domain (see Table 1 ), it is not fair to compare the value of ACC and ACC-2 across different dialogue generation models. However, taking other metrics Table 5 : The results of cross-category PWR scores. Same as Table 3 , categories in first row means where the pivot words from and categories in first column means where the system outputs from.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 458,
"end": 465,
"text": "Table 1",
"ref_id": null
},
{
"start": 597,
"end": 604,
"text": "Table 5",
"ref_id": null
},
{
"start": 657,
"end": 664,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2.2"
},
{
"text": "into account, we still have some interesting findings. One is that all the ACC results are better than random, which somehow suggest that all of these models have captured the differences of language use under each style. The other is that although model 2 has the highest PWR and moderate level of PWP, but, meanwhile, it has the lowest ACC. In other words, it generates lots of pivot words, but the classifier does not classify them into the correct style. To understand why, we analysed the PWP for each style, and found that it works fine for ff (69.02) and mm (45.96), but collapses at the merged category, i.e., mf/fm. It obtains a PWP at only 25.82 and a PWR at 56.95 (which is not a very bad number). It appears that although the generator has produced fine amount of pivot words for expressing the style of mf/fm, but, the frequency of many of them might not be high. This also suggests that even though we found some evidences from experiments in section 2 supporting the theory of linguistic matching and the merging of mf and fm, but it seems that the generator we use cannot handle this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.2.3"
},
{
"text": "More interestingly, we also find that model 3 not only has the highest performance on PWP, which means more than half of the tokens it produces are pivot words of the correct style, but also has the highest score on ACC-2 (i.e., the accuracy of gender classification), which is even better than model 1, a model that originally designed having two styles. This approves that by additionally modelling the social identities of the listeners, it helps the generator to utter more speaker identity related words because it takes the difference on speaking style when talking to listeners with different social identities into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.2.3"
},
{
"text": "To understand how model 3 works, we consider similar experiment to the one in section 2.3 by measuring the cross-category PWR. From Table 5 , we observe similar phenomenon as in section 2.3. For example, the pair mm and ff yields the lowest PWR when being as the pivot word source of each other. In contrast, they reach the highest score if they are their own pivot word source. fm and mf have relatively high PWR when being each other's pivot word source. When a male talks to an another male, they say very few words that females always say. Nevertheless, we also observe that sentences produced by ff always have the highest PWR regardless of where the pivot words are coming from. This should be a result of two reasons: most conversations in PERSONALDIALOG dataset are between two females and PRR is a metric that sensitive to the size of test data (i.e., it is very likely that the more sentences are produced the more pivot words are included).",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross-category PWR",
"sec_num": "3.2.4"
},
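{
"text": "The cross-category analysis above can be sketched as follows; this is an illustrative reimplementation with toy stand-in data, not the authors' code or the real pivot word lists:

```python
def pwr(outputs, pivot_words):
    # fraction of pivot word types that appear at least once in the outputs
    generated = {tok for sent in outputs for tok in sent}
    return len(generated & pivot_words) / len(pivot_words)

# toy stand-ins for the real system outputs and extracted pivot words
outputs = {'ff': [['aw', 'cute']], 'mm': [['bro', 'game']]}
pivots = {'ff': {'aw', 'cute', 'dear'}, 'mm': {'bro', 'game'}}

# keys: (where the outputs come from, where the pivot words come from)
table = {(o, p): pwr(outputs[o], pivots[p]) for o in outputs for p in pivots}
```

With real data, the diagonal entries of such a table are expected to be the largest, matching the observation that each category scores highest against its own pivot words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-category PWR",
"sec_num": "3.2.4"
},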
{
"text": "We investigated the language use on Chinese social media regarding to the social identities of speakers and listeners. Specifically, we aim to explore whether the listener's social identities impact the responder's language use and whether such differences are separable. The primary answers to both of these questions are \"Yes\" on the basis of our experiments and, additionally, by conducting pivot word analysis, we also found that mf and fm are less separable owing to the linguistic matching phenomenon. This raises as open question of which style categorisation scheme (i.e., whether to distinguishing mf and fm or not) is better for modelling personalised dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "We then trained personalised response generators which take the social identities of listeners into account. To conduct insightful analysis, we design a number of new metrics with the help of the speaking style classifiers and the extracted pivot words. The outcomes show that modelling listener's identity assists the dialogue system to express more of its own identity. However, our system failed to model the style of mf/fm, which suggests the necessity of disassociating the style between mf and fm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Note that our work focus mainly on the gender, which from our perspective, underlies further studies on investigating the influence of other listener's social variables, such as age or location, or even of listener's persona as a whole. Likewise, since we study only on data from Chinese social media, it is also worth to validate whether our findings still hold in multilingual platforms like Twitter. As for the designing of dialogue systems, we highlighted the importance of modelling listener's persona for the Chatbot to express its own personality, it is also worthwhile to evaluate the built system in other angles, such as relevance and fluency, or to validate whether the resulting chat machine is empathetic (Fung et al., 2018) or not.",
"cite_spans": [
{
"start": 718,
"end": 737,
"text": "(Fung et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Our decision on using single turn dialogue also limits the generalisability of our conclusion to real conversations since the assimilation of each others style may progress in the course of a dialogue. This may result in under-estimating the effect of the linguistic matching between speakers and listeners. In future, we will extend our work into multi-turn dialogue modelling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "In this paper, we use the gender as an example of social identity to understand how the speaking style of a speaker is influenced. To this end, we build gender classifiers and stylised dialogue systems. In light of the discussion in Larson (2017) , gender is notoriously difficult to detect (Buolamwini and Gebru, 2018) , and mis-gendering individuals is harmful to users (Keyes, 2018) . Therefore, we are not and will not apply or extend the built classifiers and dialogue systems into real applications. We hope our findings could help with further works on mitigating gender bias (Liu et al., 2020) or improving fairness in dialogue systems.",
"cite_spans": [
{
"start": 233,
"end": 246,
"text": "Larson (2017)",
"ref_id": "BIBREF16"
},
{
"start": 291,
"end": 319,
"text": "(Buolamwini and Gebru, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 372,
"end": 385,
"text": "(Keyes, 2018)",
"ref_id": "BIBREF12"
},
{
"start": 583,
"end": 601,
"text": "(Liu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Statement",
"sec_num": "5"
},
{
"text": "Weibo is the largest Chinese social media.3 We use fm to represent the style used by a male speaker when talking to a female listener. Similar definition applies to mf, mm, and ff.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Chinese as a discourse based language, pro-drop(Huang, 1984) is much more common than that in, for example, English, especially when the dropped pronoun referring to one of the speakers in a conversation(Chen et al., 2018).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their helpful comments. Guanyi Chen is supported by China Scholarship Council (No.201907720022).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Gender identity and lexical variation in social media",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Schnoebelen",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Sociolinguistics",
"volume": "18",
"issue": "2",
"pages": "135--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman, Jacob Eisenstein, and Tyler Schnoe- belen. 2014. Gender identity and lexical varia- tion in social media. Journal of Sociolinguistics, 18(2):135-160.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Gender shades: Intersectional accuracy disparities in commercial gender classification",
"authors": [
{
"first": "Joy",
"middle": [],
"last": "Buolamwini",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 1st Conference on Fairness, Accountability and Transparency",
"volume": "81",
"issue": "",
"pages": "77--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in com- mercial gender classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, volume 81 of Proceedings of Ma- chine Learning Research, pages 77-91, New York, NY, USA. PMLR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Modelling pro-drop with the rational speech acts model",
"authors": [
{
"first": "Guanyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chenghua",
"middle": [],
"last": "Kees Van Deemter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "159--164",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6519"
]
},
"num": null,
"urls": [],
"raw_text": "Guanyi Chen, Kees van Deemter, and Chenghua Lin. 2018. Modelling pro-drop with the rational speech acts model. In Proceedings of the 11th International Conference on Natural Language Generation, pages 159-164, Tilburg University, The Netherlands. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Jason Weston, Douwe Kiela, and Adina Williams",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Angela Fan, Ledell Wu, Jason We- ston, Douwe Kiela, and Adina Williams. 2020.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multi-dimensional gender bias classification",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00614"
]
},
"num": null,
"urls": [],
"raw_text": "Multi-dimensional gender bias classification. arXiv preprint arXiv:2005.00614.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Rethinking text attribute transfer: A lexical analysis",
"authors": [
{
"first": "Yao",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiaze",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "24--33",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8604"
]
},
"num": null,
"urls": [],
"raw_text": "Yao Fu, Hao Zhou, Jiaze Chen, and Lei Li. 2019. Re- thinking text attribute transfer: A lexical analysis. In Proceedings of the 12th International Conference on Natural Language Generation, pages 24-33, Tokyo, Japan. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Empathetic dialog systems",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Bertero",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ji",
"middle": [
"Ho"
],
"last": "Park",
"suffix": ""
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
}
],
"year": 2018,
"venue": "The international conference on language resources and evaluation. European Language Resources Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascale Fung, Dario Bertero, Peng Xu, Ji Ho Park, Chien-Sheng Wu, and Andrea Madotto. 2018. Em- pathetic dialog systems. In The international con- ference on language resources and evaluation. Eu- ropean Language Resources Association.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Stylometric analysis of bloggers' age and gender",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Sudeshna",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "Mayur",
"middle": [],
"last": "Rustagi",
"suffix": ""
}
],
"year": 2009,
"venue": "Third international AAAI conference on weblogs and social media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumit Goswami, Sudeshna Sarkar, and Mayur Rustagi. 2009. Stylometric analysis of bloggers' age and gen- der. In Third international AAAI conference on we- blogs and social media.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Generating natural language under pragmatic constraints",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 1987,
"venue": "Journal of Pragmatics",
"volume": "11",
"issue": "6",
"pages": "689--719",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy. 1987. Generating natural language un- der pragmatic constraints. Journal of Pragmatics, 11(6):689-719.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On the distribution and reference of empty pronouns",
"authors": [
{
"first": "C-T James",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 1984,
"venue": "Linguistic inquiry",
"volume": "",
"issue": "",
"pages": "531--574",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C-T James Huang. 1984. On the distribution and refer- ence of empty pronouns. Linguistic inquiry, pages 531-574.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "427--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The misgendering machines",
"authors": [
{
"first": "Os",
"middle": [],
"last": "Keyes",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3274357"
]
},
"num": null,
"urls": [],
"raw_text": "Os Keyes. 2018. The misgendering machines:",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Trans/hci implications of automatic gender recognition",
"authors": [],
"year": null,
"venue": "Proc. ACM Hum.-Comput. Interact",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3274357"
]
},
"num": null,
"urls": [],
"raw_text": "Trans/hci implications of automatic gender recog- nition. Proc. ACM Hum.-Comput. Interact., 2(CSCW).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1181"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Exploring personalized neural conversational models",
"authors": [
{
"first": "Satwik",
"middle": [],
"last": "Kottur",
"suffix": ""
},
{
"first": "Xiaoyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Vitor",
"middle": [],
"last": "Carvalho",
"suffix": ""
}
],
"year": 2017,
"venue": "Twenty-Sixth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3728--3734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satwik Kottur, Xiaoyu Wang, and Vitor Carvalho. 2017. Exploring personalized neural conversational mod- els. In Twenty-Sixth International Joint Conference on Artificial Intelligence, pages 3728-3734.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Gender as a variable in naturallanguage processing: Ethical considerations",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Larson",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1601"
]
},
"num": null,
"urls": [],
"raw_text": "Brian Larson. 2017. Gender as a variable in natural- language processing: Ethical considerations. In Pro- ceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 1-11, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "110--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objec- tive function for neural conversation models. In Pro- ceedings of the 2016 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A persona-based neural conversation model",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Spithourakis",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "994--1003",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Georgios Sp- ithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 994-1003.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "DGST: a dual-generator network for text style transfer",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Guanyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chenghua",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ruizhe",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Li, Guanyi Chen, Chenghua Lin, and Ruizhe Li. 2020. DGST: a dual-generator network for text style transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Does gender matter? towards fairness in dialogue systems",
"authors": [
{
"first": "Haochen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jamell",
"middle": [],
"last": "Dacon",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zitao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiliang",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10486"
]
},
"num": null,
"urls": [],
"raw_text": "Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, and Jiliang Tang. 2019. Does gender matter? towards fairness in dialogue systems. arXiv preprint arXiv:1910.10486.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Zitao Liu, and Jiliang Tang. 2020. Mitigating gender bias for neural dialogue generation with adversarial learning",
"authors": [
{
"first": "Haochen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wentao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yiqi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.13028"
]
},
"num": null,
"urls": [],
"raw_text": "Haochen Liu, Wentao Wang, Yiqi Wang, Hui Liu, Zi- tao Liu, and Jiliang Tang. 2020. Mitigating gender bias for neural dialogue generation with adversarial learning. arXiv preprint arXiv:2009.13028.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Key fact as pivot: A two-stage model for low resource table-to-text generation",
"authors": [
{
"first": "Shuming",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tianyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2047--2057",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1197"
]
},
"num": null,
"urls": [],
"raw_text": "Shuming Ma, Pengcheng Yang, Tianyu Liu, Peng Li, Jie Zhou, and Xu Sun. 2019. Key fact as pivot: A two-stage model for low resource table-to-text gen- eration. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 2047-2057, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "how old do you think i am",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Rilana",
"middle": [],
"last": "Gravel",
"suffix": ""
},
{
"first": "Dolf",
"middle": [],
"last": "Trieschnigg",
"suffix": ""
},
{
"first": "Theo",
"middle": [],
"last": "Meder",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Nguyen, Rilana Gravel, Dolf Trieschnigg, and Theo Meder. 2013. \"how old do you think i am?\";",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "a study of language and age in twitter",
"authors": [],
"year": null,
"venue": "Proceedings of the seventh international AAAI conference on weblogs and social media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "a study of language and age in twitter. In Proceed- ings of the seventh international AAAI conference on weblogs and social media. AAAI Press.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Author age prediction from text using linear regression",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th ACL-HLT workshop on language technology for cultural heritage, social sciences, and humanities",
"volume": "",
"issue": "",
"pages": "115--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Nguyen, Noah A Smith, and Carolyn Rose. 2011. Author age prediction from text using linear regres- sion. In Proceedings of the 5th ACL-HLT workshop on language technology for cultural heritage, social sciences, and humanities, pages 115-123.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Why gender and age prediction from tweets is hard: Lessons from a crowdsourcing experiment",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dolf",
"middle": [],
"last": "Trieschnigg",
"suffix": ""
},
{
"first": "Rilana",
"middle": [],
"last": "Seza Dogru\u00f6z",
"suffix": ""
},
{
"first": "Mari\u00ebt",
"middle": [],
"last": "Gravel",
"suffix": ""
},
{
"first": "Theo",
"middle": [],
"last": "Theune",
"suffix": ""
},
{
"first": "Franciska",
"middle": [
"De"
],
"last": "Meder",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jong",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1950--1961",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Nguyen, Dolf Trieschnigg, A Seza Dogru\u00f6z, Ri- lana Gravel, Mari\u00ebt Theune, Theo Meder, and Fran- ciska De Jong. 2014. Why gender and age predic- tion from tweets is hard: Lessons from a crowdsourc- ing experiment. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1950-1961.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Linguistic style matching in social interaction",
"authors": [
{
"first": "G",
"middle": [],
"last": "Kate",
"suffix": ""
},
{
"first": "James",
"middle": [
"W"
],
"last": "Niederhoffer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pennebaker",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Language and Social Psychology",
"volume": "21",
"issue": "4",
"pages": "337--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kate G Niederhoffer and James W Pennebaker. 2002. Linguistic style matching in social interaction. Jour- nal of Language and Social Psychology, 21(4):337- 360.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Assigning personality/identity to a chatting machine for coherent conversation generation",
"authors": [
{
"first": "Qiao",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jingfang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.02861"
]
},
"num": null,
"urls": [],
"raw_text": "Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2017. Assigning per- sonality/identity to a chatting machine for co- herent conversation generation. arXiv preprint arXiv:1706.02861.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Effects of age and gender on blogging",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Schler",
"suffix": ""
},
{
"first": "Moshe",
"middle": [],
"last": "Koppel",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Argamon",
"suffix": ""
},
{
"first": "James",
"middle": [
"W"
],
"last": "Pennebaker",
"suffix": ""
}
],
"year": 2006,
"venue": "AAAI spring symposium: Computational approaches to analyzing weblogs",
"volume": "6",
"issue": "",
"pages": "199--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. 2006. Effects of age and gender on blogging. In AAAI spring sympo- sium: Computational approaches to analyzing we- blogs, volume 6, pages 199-205.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Dialogue state tracking with pretrained encoder for multi-domain task-oriented dialogue systems",
"authors": [
{
"first": "Dingmin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chenghua",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10663"
]
},
"num": null,
"urls": [],
"raw_text": "Dingmin Wang, Chenghua Lin, Li Zhong, and Kam- Fai Wong. 2020a. Dialogue state tracking with pre- trained encoder for multi-domain task-oriented dia- logue systems. arXiv preprint arXiv:2004.10663.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A large-scale chinese short-text conversation dataset",
"authors": [
{
"first": "Yida",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Pei",
"middle": [],
"last": "Ke",
"suffix": ""
},
{
"first": "Yinhe",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Kaili",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "NLPCC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, and Minlie Huang. 2020b. A large-scale chinese short-text conversation dataset. In NLPCC.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Perceived dimensions of interpersonal relations",
"authors": [
{
"first": "Myron",
"middle": [],
"last": "Wish",
"suffix": ""
},
{
"first": "Morton",
"middle": [],
"last": "Deutsch",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"J"
],
"last": "Kaplan",
"suffix": ""
}
],
"year": 1976,
"venue": "Journal of Personality and social Psychology",
"volume": "33",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myron Wish, Morton Deutsch, and Susan J Kaplan. 1976. Perceived dimensions of interpersonal rela- tions. Journal of Personality and social Psychology, 33(4):409.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Transfertransfo: A transfer learning approach for neural network based conversational agents",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.08149"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Personalizing dialogue agents: I have a dog, do you have pets too?",
"authors": [
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2204--2213",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 2204-2213. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Personalized dialogue generation with diversified traits",
"authors": [
{
"first": "Yinhe",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Guanyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Song",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.09672"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhe Zheng, Guanyi Chen, Minlie Huang, Song Liu, and Xuan Zhu. 2019. Personalized dialogue genera- tion with diversified traits. CoRR, abs/1901.09672.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Stylized dialogue response generation using stylized unpaired texts",
"authors": [
{
"first": "Yinhe",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Zikai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Rongsheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shilei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoxi",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.12719"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhe Zheng, Zikai Chen, Rongsheng Zhang, Shilei Huang, Xiaoxi Mao, and Minlie Huang. 2020a. Styl- ized dialogue response generation using stylized un- paired texts. arXiv preprint arXiv:2009.12719.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A pre-training based personalized dialogue generation model with persona-sparse data",
"authors": [
{
"first": "Yinhe",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Rongsheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoxi",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "9693--9700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhe Zheng, Rongsheng Zhang, Minlie Huang, and Xiaoxi Mao. 2020b. A pre-training based personal- ized dialogue generation model with persona-sparse data. In AAAI, pages 9693-9700.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Emotional chatting machine: Emotional conversation generation with internal and external memory",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Tianyang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting ma- chine: Emotional conversation generation with inter- nal and external memory. In AAAI.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Average response length of each style.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Confusion matrix of the 4-way gender-pair classification using fastText.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "), \u82f9\u679c (Apple), \u4e09\u661f (Samsung), \u5c0f\u7c73 (Xiaomi), \u7f8e\u56fd (America), \u65e5\u672c (Japan), \u4e2d\u56fd (China), \u5927\u9646 (Mainland), \u53f0\u6e7e (Taiwan),\"\uff0c\", \"\u3002\" fm \u6e38\u620f (game), \u738b\u8005 (Honor of Kings), \u65e9\u5b89 (good morning), \u665a\u5b89 (good night), \u62cd\u7167 (photograph), \u8bfb\u4e66 (reading), \u5de5\u4f5c (working), \u6211 (I), \u4f60 (you) mf \u5927\u53d4 (Uncle), \u5f1f\u5f1f (little Brother), \u54e5\u54e5 (elder Brother), \u4e0a\u73ed (Working), \u559d\u9152 (Drinking), \u53a6\u95e8 (Xiamen), \u5e7f \u4e1c (Guangdong), \u5e7f\u5dde (Guangzhou), \u55ef\u55ef (Uh-huh), \u6211 (I), \u4f60 (you), \"\u223c\u223c\", \"!!!!\", \"???\" ff \u738b\u4fca\u51ef (a celebrity), \u6613\u70ca\u5343\u73ba (a celebrity), \u9e7f\u6657 (a celebrity), KPop, \u7537\u4e3b (leading actor), \u7535\u89c6\u5267 (teleplay), \u5316\u5986 (make up), \u6f02\u4eae (beauty), \u88d9\u5b50 (skirt), \u4fbf\u5b9c (cheap), \u6dd8\u5b9d (Taobao)\uff0c \u55ef\u55ef (Uh-huh), \u554a\u554a\u554a (Ah Ah Ah), \u6211 (I), \u4f60 (you), \"\u223c\u223c\u223c\u223c\", \"!!??\", \"!!!!\" male \u534e\u4e3a (Huawei), \u82f9\u679c (Apple), \u7f8e\u56fd (America), \u5927\u9646 (Mainland), \u53f0\u6e7e (Taiwan), \u59b9\u5b50 (girl), \u5ab3\u5987 (wife), \u6e38\u620f (game), \"\uff0c\", \"\u3002\"female \u738b\u4fca\u51ef (a celebrity), \u6613\u70ca\u5343\u73ba (a celebrity), \u7537\u4e3b (leading actor), \u7535\u89c6\u5267 (teleplay), \u5316\u5986 (make up), \u88d9\u5b50 (skirt), \u9762\u819c (mask), \u5218\u6d77 (bang), \u6211 (I), \"\u223c\u223c\", \"!!!!\", \"\u223c\u223c\u223c\u223c\", hhh, QAQ, mua",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Illustration of our personalised response generator.",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td>2:</td><td>Predict label \u0177 and confidence p for x</td></tr><tr><td>3:</td><td>if \u0177 = y and p &gt; \u03b1 then</td></tr><tr><td>4:</td><td>for each word type t in x do</td></tr><tr><td>5:</td><td>Construct x \\t by removing all t in x</td></tr><tr><td>6:</td><td>Predict label \u0177' and confidence p' for</td></tr><tr><td/><td>x \\t</td></tr><tr><td>7:</td><td>if \u0177' \u2260 y or p \u2212 p' &gt; \u03b2 then</td></tr><tr><td>8:</td><td>Add t to \u2126 c and add pivot word fre-</td></tr><tr><td/><td>quency p(t, s) by 1</td></tr><tr><td>9:</td><td>end if</td></tr><tr><td>10:</td><td>end for</td></tr><tr><td>11:</td><td>end for</td></tr></table>",
"text": "Algorithm 1 Classifier-based Pivot Word Discovery Input: Dataset D, Style Set S, BOW Classifier f , Confidence Threshold \u03b1, and Word Pivot Frequency Threshold \u03b2. Output: A set of Pivot Words \u2126 1: for each input sentence x and corresponding label y = s \u2208 S in D do",
"html": null,
"type_str": "table",
"num": null
},
"TABREF2": {
"content": "<table><tr><td>mm</td><td>mf</td><td>fm</td><td>ff</td><td>male</td><td>female</td></tr><tr><td colspan=\"6\">mm 0.07 (-0.70) 0.96 (+0.19) 0.99 (+0.22) 0.98 (+0.21) 0.12 (-0.65) 0.99 (+0.22)</td></tr><tr><td colspan=\"6\">mf 0.72 (+0.19) 0.00 (-0.53) 0.23 (-0.30) 0.01 (-0.52) 0.41 (-0.12) 0.00 (-0.53)</td></tr><tr><td colspan=\"6\">fm 0.27 (-0.25) 0.31 (-0.21) 0.02 (-0.50) 0.19 (-0.33) 0.04 (-0.48) 0.11 (-0.41)</td></tr><tr><td colspan=\"6\">ff 0.79 (+0.05) 0.10 (-0.64) 0.21 (-0.53) 0.00 (-0.74) 0.94 (+0.20) 0.00 (-0.74)</td></tr></table>",
"text": "Lists of extracted pivot words in each category of the gender classifier and the gender-pair classifier.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF3": {
"content": "<table/>",
"text": "",
"html": null,
"type_str": "table",
"num": null
},
"TABREF5": {
"content": "<table><tr><td>mm</td><td>mf</td><td>fm</td><td>ff</td><td colspan=\"2\">male female</td></tr><tr><td colspan=\"5\">mm 57.01 51.24 51.82 46.09 46.85</td><td>39.07</td></tr><tr><td colspan=\"5\">mf 58.60 59.12 63.13 58.00 46.85</td><td>50.91</td></tr><tr><td colspan=\"5\">fm 54.14 55.66 59.22 55.87 44.96</td><td>45.90</td></tr><tr><td colspan=\"5\">ff 68.79 70.17 72.63 76.87 57.56</td><td>66.97</td></tr></table>",
"text": "Evaluation results of response generators under different speaking-style categorisation schemes, using the metrics introduced in section 3.2.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}