{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:11:47.152652Z"
},
"title": "Reducing Unintended Identity Bias in Russian Hate Speech Detection",
"authors": [
{
"first": "Nadezhda",
"middle": [],
"last": "Zueva",
"suffix": "",
"affiliation": {
"laboratory": "VK Lab",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Madina",
"middle": [],
"last": "Kabirova",
"suffix": "",
"affiliation": {
"laboratory": "VK Lab",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kalaidin",
"suffix": "",
"affiliation": {
"laboratory": "VK Lab",
"institution": "",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Toxicity has become a grave problem for many online communities and has been growing across many languages, including Russian. Hate speech creates an environment of intimidation, discrimination, and may even incite some real-world violence. Both researchers and social platforms have been focused on developing models to detect toxicity in online communication for a while now. A common problem of these models is the presence of bias towards some words (e.g. woman, black, jew or \u0436\u0435\u043d\u0449\u0438\u043d\u0430, \u0447\u0435\u0440\u043d\u044b\u0439, \u0435\u0432\u0440\u0435\u0439) that are not toxic, but serve as triggers for the classifier due to model caveats. In this paper, we describe our efforts towards classifying hate speech in Russian, and propose simple techniques of reducing unintended bias, such as generating training data with language models using terms and words related to protected identities as context and applying word dropout to such words.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Toxicity has become a grave problem for many online communities and has been growing across many languages, including Russian. Hate speech creates an environment of intimidation, discrimination, and may even incite some real-world violence. Both researchers and social platforms have been focused on developing models to detect toxicity in online communication for a while now. A common problem of these models is the presence of bias towards some words (e.g. woman, black, jew or \u0436\u0435\u043d\u0449\u0438\u043d\u0430, \u0447\u0435\u0440\u043d\u044b\u0439, \u0435\u0432\u0440\u0435\u0439) that are not toxic, but serve as triggers for the classifier due to model caveats. In this paper, we describe our efforts towards classifying hate speech in Russian, and propose simple techniques of reducing unintended bias, such as generating training data with language models using terms and words related to protected identities as context and applying word dropout to such words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the ever-growing popularity of social media, there is an immense amount of user-generated online content (e.g. as of May 2019, approximately 30,000 hours worth of videos are uploaded to YouTube every hour 1 ). In particular, there has been an exponential increase in user-generated texts such as comments, blog posts, status updates, messages, forum threads, etc. The low entry threshold and relative anonymity of the Internet have resulted not only in the exchange of information and content but also in the rise of trolling, hate speech, and overall toxicity 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Harassment is a pervasive issue for most online communities. A Pew survey conducted in 2014 3 found that 73% of Internet users have witnessed online harassment, and 40% have personally experienced it.",
"cite_spans": [
{
"start": 92,
"end": 93,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Explicit policies against hate speech can be considered an industry standard 4 across social platforms, including platforms popular among Russianspeaking users (e.g. VK, the largest social network in Russia and the CIS 5 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The study of hate speech, in online communication in particular, has been gaining traction in Russia for a while now due to it being a prevalent issue long before the Internet (Lokshina, 2003) . The number of competitions and workshops (e.g. HASOC at FIRE-2019; TRAC 2020; HatEval and OffensEval at SemEval-2019) on the topic of hate speech and toxic language detection reflect the scale of the situation. Social platforms utilize a wide variety of models to detect or classify hate speech. However, the majority of existing models operate with a bias in their predictions. They tend to classify comments mentioning certain commonly harassed identities (e.g. containing words such as woman, black, jew or \u0436\u0435\u043d\u0449\u0438\u043d\u0430, \u0447\u0435\u0440\u043d\u044b\u0439, \u0435\u0432\u0440\u0435\u0439) as toxic, while the comment itself may lack any actual toxicity. Identity terms of frequently targeted social groups have higher toxicity scores since they are found more often in abusive and toxic comments than terms related to other social groups. If the data used to train a machine learning model is skewed towards these words, the resulting model is likely to adopt this bias 6 .",
"cite_spans": [
{
"start": 176,
"end": 192,
"text": "(Lokshina, 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Inappropriately high toxicity scores of terms related to specific social groups can potentially negate the benefits of using machine learning models to fight the spread of hate speech. This motivated us to work towards reducing these biases. In 66 this paper, our main goal is to reduce the false toxicity scores of non-toxic comments that include identity terms empirically known to introduce model bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Little research has been done on the automatic detection of toxicity and hate speech in the Russian language. Potapova and Gordeev (2016) used convolutional neural networks to detect aggression in user messages on anonymous message boards. Andrusyak et al. (2018) proposed an unsupervised technique for extending the vocabulary of abusive and obscene words in Russian and Ukrainian. More recently, Smetanin (2020) utilized pre-trained BERT (Devlin et al., 2019) and Universal Sentence Encoder (Yang et al., 2019) architectures to classify toxic Russian-language content.",
"cite_spans": [
{
"start": 110,
"end": 137,
"text": "Potapova and Gordeev (2016)",
"ref_id": "BIBREF10"
},
{
"start": 240,
"end": 263,
"text": "Andrusyak et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 440,
"end": 461,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 493,
"end": 512,
"text": "(Yang et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hate Speech Detection in Russian",
"sec_num": "2.1"
},
{
"text": "Dixon et al. 2018introduced Pinned AUC to control for unintended bias. In this paper, we adopt Generalized Mean of Bias AUCs (GMB-AUC) introduced by (Borkan et al., 2019b) , following a study by (Borkan et al., 2019a) showing the limitations of Pinned AUC. Vaidya et al. (2020) proposed a model that learns to predict the toxicity of a comment, as well as the protected identities present, in order to reduce unintended bias as shown by an increase in Generalized Mean of Bias AUCs. Nozza et al. (2019) focused on misogyny detection, providing a synthetic test for evaluating bias and some mitigation strategies for it.",
"cite_spans": [
{
"start": 149,
"end": 171,
"text": "(Borkan et al., 2019b)",
"ref_id": "BIBREF2"
},
{
"start": 195,
"end": 217,
"text": "(Borkan et al., 2019a)",
"ref_id": "BIBREF1"
},
{
"start": 257,
"end": 277,
"text": "Vaidya et al. (2020)",
"ref_id": "BIBREF14"
},
{
"start": 483,
"end": 502,
"text": "Nozza et al. (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Unintended Bias",
"sec_num": "2.2"
},
{
"text": "To our knowledge, there is no published research on reducing text classification bias in Russian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Unintended Bias",
"sec_num": "2.2"
},
{
"text": "For our experiments, we manually collected a corpus 7 of comments posted on a major Russian social network. The mean length of each sample is 26 characters; samples over 50 characters (5% of the total number of samples) were shortened. The corpus consists of 100,000 samples that we randomly split into training, validation and test sets in the ratio 8:1:1. Each comment was assigned a label based on whether or not it contained various forms of hate speech or abuse, including threats, harassment, insults, mentions of family members, as well as language used to promote lookism, sexism, homophobia, nationalism, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "As benchmarks, we also used a small corpus of 2,000 samples in mixed Russian and Ukrainian collected by (Andrusyak et al., 2018) , and a corpus in Russian used by (Smetanin, 2020) (around 14,000 samples).",
"cite_spans": [
{
"start": 104,
"end": 128,
"text": "(Andrusyak et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "We considered the prediction of labels related to hate speech as a task and validated performance using introduced Generalized Mean of Bias AUCs (Borkan et al., 2019b) to analyze whether or not the proposed methods help reduce text classification bias.",
"cite_spans": [
{
"start": 145,
"end": 167,
"text": "(Borkan et al., 2019b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task & Evaluation",
"sec_num": "3.2"
},
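{
"text": "A minimal sketch (ours, not from the paper) of computing the Generalized Mean of Bias AUCs, assuming numpy arrays of binary toxicity labels, model scores, and one boolean mask per protected identity; the power p = -5 follows the value commonly used with this metric:\nimport numpy as np\nfrom sklearn.metrics import roc_auc_score\n\ndef gmb_auc(y_true, y_score, identity_masks, p=-5):\n    # Power mean of per-identity subgroup AUCs (Borkan et al., 2019b).\n    aucs = []\n    for mask in identity_masks:  # one boolean array per protected identity\n        subset = y_true[mask]\n        if mask.any() and subset.min() != subset.max():  # both classes must be present\n            aucs.append(roc_auc_score(subset, y_score[mask]))\n    aucs = np.asarray(aucs)\n    return float(np.mean(aucs ** p) ** (1.0 / p))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task & Evaluation",
"sec_num": null
},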
{
"text": "We manually compiled a list of Russian words related to protected identities. The words were split, based on the type of hate speech used, into the following classes: lookism, sexism, nationalism, threats, harassment, homophobia, and other. Extracts from the full list are provided in Table 1 . Total number of words in the list is 214. The full list of protected identities and related words is available here: https://vk.cc/aAS3TQ.",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 292,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Protected Identities",
"sec_num": "3.3"
},
{
"text": "We used a model based on the self-attentive encoder (Lin et al., 2017) . We directly feed the token embeddings matrix to the attention layer instead of the bi-LSTM encoder, making it a pure selfattention model similar to the one used in Transformer (Vaswani et al., 2017) . An advantage of this architecture is that the individual attention weights for each input token can be interpretable (Lin et al., 2017) . This makes it possible to visualize what triggers the classifier, giving us an opportunity to explore the data and extend our list of protected identities. To overcome the problem of out-ofvocabulary words, we trained byte pair encoding (Sennrich et al., 2015 ) on a corpora of Russian subtitles taken from a large dataset collected by (Shavrina and Shapovalova) , and used it for input tokenization.",
"cite_spans": [
{
"start": 52,
"end": 70,
"text": "(Lin et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 249,
"end": 271,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 391,
"end": 409,
"text": "(Lin et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 649,
"end": 671,
"text": "(Sennrich et al., 2015",
"ref_id": "BIBREF11"
},
{
"start": 748,
"end": 774,
"text": "(Shavrina and Shapovalova)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.4"
},
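{
"text": "A minimal PyTorch sketch (our simplified reconstruction, not the released code) of the pure self-attention classifier described above: structured self-attention (Lin et al., 2017) is applied directly to the token embedding matrix and the pooled representation feeds a toxicity head; layer sizes are illustrative.\nimport torch\nimport torch.nn as nn\n\nclass SelfAttnClassifier(nn.Module):\n    def __init__(self, vocab_size, emb_dim=128, attn_dim=64, n_heads=4):\n        super().__init__()\n        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)\n        self.w1 = nn.Linear(emb_dim, attn_dim, bias=False)  # W_s1 of Lin et al.\n        self.w2 = nn.Linear(attn_dim, n_heads, bias=False)  # W_s2 of Lin et al.\n        self.out = nn.Linear(n_heads * emb_dim, 1)          # toxicity logit\n\n    def forward(self, token_ids):                  # (batch, seq_len) BPE token ids\n        h = self.emb(token_ids)                    # (batch, seq_len, emb_dim)\n        scores = self.w2(torch.tanh(self.w1(h)))   # (batch, seq_len, n_heads)\n        attn = torch.softmax(scores, dim=1)        # interpretable per-token weights\n        pooled = torch.einsum('bsh,bse->bhe', attn, h).flatten(1)\n        return self.out(pooled)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},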
{
"text": "We also evaluated a CNN-based text classifier (as in (Potapova and Gordeev, 2016) ) to use as a baseline for comparison. lookism \u043a\u043e\u0440\u043e\u0432\u0430 korova \"cow\" \u043f\u044b\u0448\u043a\u0430 pishka \"donut (meaning \"plump\")\" sexism \u0436\u0435\u043d\u0449\u0438\u043d\u0430 zhenshchina \"woman\" \u0431\u0430\u0431\u0430 baba \"woman (derogatory)\" nationalism \u0447\u0435\u0445 chekh \"\"Chechen\" (derogatory) lit. \"Czech\"\" \u0435\u0432\u0440\u0435\u0439 evrei \"Jew\" threats \u0432\u044b\u0435\u0437\u0436\u0430\u0442\u044c vyezhat \"to come (after somebody)\" \u0430\u0439\u043f\u0438 aipi \"ip\" harassment \u043a\u0438\u0441\u043a\u0430 kiska \"pussy\" \u0441\u0435\u043a\u0441\u0438 seksi \"sexy\" homophobia \u0433\u0435\u0439 gay \"gay\" \u043b\u0433\u0431\u0442 LGBT \"",
"cite_spans": [
{
"start": 53,
"end": 81,
"text": "(Potapova and Gordeev, 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.4"
},
{
"text": "LGBT\" other \u043c\u0430\u043c\u043a\u0430 mamka \"mother\" \u0430\u0434\u043c\u0438\u043d admin \"admin\" Table 1 : Extracts from the full list of protected identities and related words.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "3.4"
},
{
"text": "To reduce model bias, we propose to extend the dataset with the output of pre-trained language models. We used the pre-trained Transformer language model 8 trained on the Taiga dataset (Shavrina and Shapovalova). As Taiga contains 8 sources of normative Russian text (news, fairy tales, classic literature, etc.), we assumed that the model would be able to generate non-toxic comments even with one word from protected identities given as context. We took a random word from a list of protected identities and related words as a single word prefix for language generation, and generated samples up to 20 words long or until an end token was generated. An additional 25,000 samples were generated using the described approach and added to the existing training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Generation with Language Models",
"sec_num": "3.5"
},
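{
"text": "A minimal sketch of the augmentation step, assuming a Hugging Face causal language model as a stand-in for the Taiga-trained Transformer LM used in the paper; the checkpoint name and the word list below are placeholders:\nimport random\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\n\nMODEL_NAME = 'some-russian-gpt'  # placeholder, not the checkpoint used in the paper\nPROTECTED_WORDS = ['\u0436\u0435\u043d\u0449\u0438\u043d\u0430', '\u0435\u0432\u0440\u0435\u0439', '\u0433\u0435\u0439']  # extract from the published list\n\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\nmodel = AutoModelForCausalLM.from_pretrained(MODEL_NAME)\n\ndef generate_sample(max_new_tokens=20):\n    # Use a random protected-identity word as the prefix and let the LM continue it;\n    # max_new_tokens approximates the 20-word limit described above.\n    prefix = random.choice(PROTECTED_WORDS)\n    inputs = tokenizer(prefix, return_tensors='pt')\n    output = model.generate(**inputs, do_sample=True, max_new_tokens=max_new_tokens)\n    return tokenizer.decode(output[0], skip_special_tokens=True)\n\naugmented = [generate_sample() for _ in range(25000)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Generation with Language Models",
"sec_num": null
},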
{
"text": "Random word dropout (Dai and Le, 2015) was shown to improve text classification. We utilized this technique to randomly (with 0.5 probability)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identity Dropout",
"sec_num": "3.6"
},
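{
"text": "A minimal sketch of the identity-dropout step described above, assuming whitespace-tokenized input before BPE; the word set is a placeholder for the published list:\nimport random\n\nPROTECTED_WORDS = {'\u0436\u0435\u043d\u0449\u0438\u043d\u0430', '\u0435\u0432\u0440\u0435\u0439', '\u0433\u0435\u0439'}  # placeholder extract\n\ndef identity_dropout(tokens, p=0.5, unk='<UNK>'):\n    # During training, each protected-identity token is replaced by <UNK> with probability p.\n    return [unk if t.lower() in PROTECTED_WORDS and random.random() < p else t\n            for t in tokens]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identity Dropout",
"sec_num": null
},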
{
"text": "8 https://github.com/vlarine/ruGPT2 replace protected identities in input sequences with the <UNK> token during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identity Dropout",
"sec_num": "3.6"
},
{
"text": "Following (Vaidya et al., 2020) , we evaluated a multi-task learning framework, where we extended a base model by predicting a protected identity class from an input sequence. In our setup, the loss from an extra classifier head is weighted equal to the loss from the toxicity classifier.",
"cite_spans": [
{
"start": 10,
"end": 31,
"text": "(Vaidya et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Learning",
"sec_num": "3.7"
},
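{
"text": "A minimal sketch of the multi-task objective described above, assuming a toxicity head with a single logit and an identity head over the protected-identity classes; the two losses are weighted equally:\nimport torch\nimport torch.nn.functional as F\n\ndef multitask_loss(tox_logits, tox_labels, ident_logits, ident_labels):\n    # Equal weighting of the toxicity loss and the protected-identity loss.\n    tox_loss = F.binary_cross_entropy_with_logits(tox_logits, tox_labels.float())\n    ident_loss = F.cross_entropy(ident_logits, ident_labels)\n    return 0.5 * (tox_loss + ident_loss)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Learning",
"sec_num": null
},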
{
"text": "We trained our models for 100,000 iterations with a batch size of 128, the Adam optimizer (Kingma and Ba, 2014), and a learning rate of 1e-5 with betas (0.9, 0.999) on a single NVIDIA Tesla T4 GPU. Each experiment took approximately 1 hour to run. We used embeddings pre-trained on the corpora of Russian subtitles (Shavrina and Shapovalova). We experimented with 2 different architectures (self-ATTN, CNN) in several scenarios by applying Data Generation with Language Model, Identity Dropout, and Multi-Task learning, as well as combining these approaches. We used binary crossentropy loss as the loss function for the single-task approach. As the loss function for Multi-Task learning, we used the average loss score between two tasks: predicting the toxicity score, and predicting the protected identity class. We trained our model on the training set, controlled the training process using the validation set, and evaluated metrics on the test set. We repeated each experiment 3 times and showed the mean and standard deviation values of the measurements. We applied an early stopping approach with patience level 50. The code is available on Google Drive 9 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "3.8"
},
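{
"text": "A minimal sketch of the optimization setup described above (Adam with learning rate 1e-5 and betas (0.9, 0.999), batch size 128, early stopping with patience 50); the model and the batches are stand-ins:\nimport torch\nimport torch.nn as nn\n\nmodel = nn.Linear(16, 1)  # stand-in for the toxicity classifier\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.999))\n\nbest_val, bad_steps, patience = float('inf'), 0, 50\nfor step in range(1000):  # the paper trains for 100,000 iterations\n    x = torch.randn(128, 16)                       # stand-in batch of size 128\n    y = torch.randint(0, 2, (128, 1)).float()\n    loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n    val_loss = loss.item()                         # stand-in for validation-set loss\n    if val_loss < best_val:\n        best_val, bad_steps = val_loss, 0\n    else:\n        bad_steps += 1\n        if bad_steps > patience:                   # early stopping, patience 50\n            break",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": null
},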
{
"text": "The results are provided in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results & Conclusion",
"sec_num": "4"
},
{
"text": "We showed that, for our dataset and for the benchmark from (Smetanin, 2020), adding an extra task of predicting the class of a protected identity can indeed improve the quality of toxicity classification in terms of reducing unintended bias. Moreover, we observed that simple techniques such as regularizing the input and extending the training data with external language models can help reduce unintended model bias on protected identities even further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results & Conclusion",
"sec_num": "4"
},
{
"text": "Our Dataset (Andrusyak et al., 2018) For the (Andrusyak et al., 2018 ) benchmark, we did not see much improvement in our metrics. This can be attributed to language differences, as the benchmark contains abusive words both in Russian and Ukrainian.",
"cite_spans": [
{
"start": 12,
"end": 36,
"text": "(Andrusyak et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 45,
"end": 68,
"text": "(Andrusyak et al., 2018",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results & Conclusion",
"sec_num": "4"
},
{
"text": "We also observed that the proposed models achieved competitive results across all three datasets when evaluated with F1 score. The best performing model (Attn + identity d/o + LM data + multitask setup) achieved an F1 score of 0.86 on the (Smetanin, 2020) benchmark, which is 93% of the reported SoTA performance of a much larger model fine-tuned from a BERT-like architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results & Conclusion",
"sec_num": "4"
},
{
"text": "We are interested in automatically extending our compiled list of protected identities and related words. We also expect that fine-tuning a pre-trained BERT-like model would improve our results and plan to experiment with it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "5"
},
{
"text": "https://vk.cc/aANMR4 2 https://vk.cc/aANMZn 3 https://vk.cc/aANN6p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://vk.cc/aANNbQ 5 https://vk.cc/ayxecu 6 https://vk.cc/aANNqT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The corpus is available on request to authors upon submitting a license agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://vk.cc/aANO1g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors are grateful to Daniil Gavrilov and Oktai Tatanov for useful discussions, Daniil Gavrilov for review, Viktoriia Loginova and David Prince for proofreading, and anonymous reviewers for valuable comments. The authors would also like to thank the VK Moderation Team (led by Katerina Egorushkova) for their help in building a hate speech dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Detection of abusive speech for mixed sociolects of russian and ukrainian languages",
"authors": [
{
"first": "Bohdan",
"middle": [],
"last": "Andrusyak",
"suffix": ""
},
{
"first": "Mykhailo",
"middle": [],
"last": "Rimel",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Kern",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Recent Advances in Slavonic Natural Language Processing",
"volume": "",
"issue": "",
"pages": "77--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bohdan Andrusyak, Mykhailo Rimel, and Roman Kern. 2018. Detection of abusive speech for mixed soci- olects of russian and ukrainian languages. In Pro- ceedings of Recent Advances in Slavonic Natural Language Processing, page 77-84.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Limitations of pinned auc for measuring unintended bias",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Borkan",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1903.02088"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Borkan, Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019a. Limitations of pinned auc for measuring un- intended bias. In arXiv preprint arXiv:1903.02088.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Nuanced metrics for measuring unintended bias with real data for text classification",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Borkan",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2019,
"venue": "Companion Proceedings of The 2019 World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "491--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019b. Nuanced met- rics for measuring unintended bias with real data for text classification. In Companion Proceedings of The 2019 World Wide Web Conference, pages 491- 500.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semisupervised sequence learning",
"authors": [
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3079--3087",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew M. Dai and Quoc V. Le. 2015. Semi- supervised sequence learning. In Advances in neural information processing systems, page 3079-3087.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Measuring and mitigating unintended bias in text classification",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- ing unintended bias in text classification. In Pro- ceedings of AAAI/ACM Conference on Artificial In- telligence, Ethics, and Society.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A structured self-attentive sentence embedding",
"authors": [
{
"first": "Zhouhan",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Minwei",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Cicero",
"middle": [],
"last": "Nogueira dos Santos",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "3079--3087",
"other_ids": {
"arXiv": [
"arXiv:1703.03130"
]
},
"num": null,
"urls": [],
"raw_text": "Zhouhan Lin, Minwei Feng, Cicero Nogueira dos San- tos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In arXiv preprint arXiv:1703.03130, page 3079-3087.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hate speech in russia: Overview of the problem and means for counteraction",
"authors": [
{
"first": "Tanya",
"middle": [],
"last": "Lokshina",
"suffix": ""
}
],
"year": 2003,
"venue": "Bulletin: Anthropology, Minorities, Multiculturalism",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanya Lokshina. 2003. Hate speech in russia: Overview of the problem and means for counterac- tion. In Bulletin: Anthropology, Minorities, Multi- culturalism, volume 4.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unintended bias in misogyny detection",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Volpetti",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE/WIC/ACM International Conference on Web Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debora Nozza, Claudia Volpetti, and Elisabetta Fersini. 2019. Unintended bias in misogyny detection. In IEEE/WIC/ACM International Conference on Web Intelligence.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Detecting state of aggression in sentences using cnn",
"authors": [
{
"first": "Rodmonga",
"middle": [],
"last": "Potapova",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Gordeev",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Speech and Computer",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rodmonga Potapova and Denis Gordeev. 2016. Detect- ing state of aggression in sentences using cnn. In International Conference on Speech and Computer.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1508.07909"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. In arXiv preprint arXiv:1508.07909.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "To the methodology of corpus construction for machine learning: \"taiga\" syntax tree corpus",
"authors": [
{
"first": "Tatiana",
"middle": [],
"last": "Shavrina",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Shapovalova",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tatiana Shavrina and Olga Shapovalova. To the methodology of corpus construction for machine learning: \"taiga\" syntax tree corpus\". In Corpora- 2017.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Toxic comments detection in russian",
"authors": [],
"year": 2020,
"venue": "Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Smetanin. 2020. Toxic comments detection in russian. In Computational Linguistics and Intellec- tual Technologies: Proceedings of the International Conference \"Dialogue 2020\".",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Empirical analysis of multi-task learning for reducing identity bias in toxic comment detection",
"authors": [
{
"first": "Ameya",
"middle": [],
"last": "Vaidya",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Mai",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Ning",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ameya Vaidya, Feng Mai, and Yue Ning. 2020. Empir- ical analysis of multi-task learning for reducing iden- tity bias in toxic comment detection. In Proceedings of the Fourteenth International AAAI Conference on Web and Social Media (ICWSM 2020).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Multilingual universal sentence encoder for semantic retrieval",
"authors": [
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Amin",
"middle": [],
"last": "Ahmad",
"suffix": ""
},
{
"first": "Mandy",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Jax",
"middle": [],
"last": "Law",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Hernandez Abrego",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
},
{
"first": "Yun-Hsuan",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Strope",
"suffix": ""
},
{
"first": "Ray",
"middle": [],
"last": "Kurzweil",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1907.04307"
]
},
"num": null,
"urls": [],
"raw_text": "Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Multilingual universal sentence encoder for semantic retrieval. In arXiv preprint arXiv:1907.04307.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"content": "<table/>",
"text": "Generalized Mean of Bias AUCs (GMB-AUC) and F1 scores across datasets.",
"num": null,
"type_str": "table"
}
}
}
}