Languages: Danish
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotation Creators: expert-generated
Source Datasets: original
Tags: Not-For-All-Audiences
License: restrictive usage agreement (see Licensing Information below)

Acknowledge ITU clearance agreement for the BAJER Dataset to access the repository

This repository is publicly accessible, but you have to accept the conditions to access its files and content.

To receive a copy of the BAJER Dataset, the Researcher(s) must observe the restrictions listed below. In addition to other possible remedies, failure to observe these restrictions may result in revocation of permission to use the data as well as denial of access to additional material. By accessing this dataset you agree to the following restrictions on the BAJER Dataset:

  • Purpose. The Dataset will be used for research and/or statistical purposes only.
  • Redistribution. The Dataset, in whole or in part, will not be further distributed, published, copied, or disseminated in any way or form whatsoever, whether for profit or not. The Researcher(s) is solely liable for all claims, losses, damages, costs, fees, and expenses resulting from their disclosure of the data.
  • Modification and Commercial Use. The Dataset, in whole or in part, will not be modified or used for commercial purposes. The right granted herein is specifically for the internal research purposes of the Researcher(s), and the Researcher(s) shall not duplicate or use the disclosed Database or its contents either directly or indirectly for commercialization or any other direct for-profit purpose.
  • Storage. The Researcher(s) must ensure that the data is stored and processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures in accordance with the GDPR.
  • Disclaimers. The Database has been developed as part of research conducted at ITU Copenhagen. The Database is experimental in nature and is made available “as is” without obligation by ITU Copenhagen to provide accompanying services or support. The entire risk as to the quality and performance of the Database is with the Researcher(s).
  • Governing law and indemnification. This agreement is governed by Danish law. To the extent allowed by law, the Researcher(s) shall indemnify and hold harmless ITU against any and all claims, losses, damages, costs, fees, and expenses resulting from the Researcher(s)' possession and/or use of the Dataset.


Dataset Card for "Bajer"

Dataset Summary

This is a high-quality dataset of Danish-language social media posts annotated for misogyny.

Online misogyny, a category of online abusive language, has serious and harmful social consequences. Automatic detection of misogynistic language online, while imperative, poses complicated challenges for data gathering, data annotation, and bias mitigation, as this type of data is linguistically complex and diverse.

See the accompanying ACL paper Annotating Online Misogyny for full details.

Supported Tasks and Leaderboards

Misogyny and abusive language detection, framed as text classification over the subtask labels described below.

Languages

Danish (bcp47:da)

Dataset Structure

Data Instances

Bajer

  • Size of downloaded dataset files: 7.29 MiB
  • Size of the generated dataset: 6.57 MiB
  • Total amount of disk used: 13.85 MiB
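A minimal sketch of loading the dataset with the Hugging Face `datasets` library, assuming access has already been granted via the agreement above and the machine is authenticated with the Hub (e.g. via `huggingface-cli login`); the repository id below is a placeholder:

```python
from datasets import load_dataset

# Placeholder repository id; substitute the dataset's actual path on the Hub.
# Loading this gated dataset requires accepting the ITU clearance agreement
# and an authenticated Hub session (e.g. via `huggingface-cli login`).
bajer = load_dataset("your-namespace/bajer", split="train")

print(bajer[0]["text"], bajer[0]["subtask_A"])
```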

An example of 'train' looks as follows.

```python
{
  'id': '0',
  'dataset_id': '0',
  'label_id': '0',
  'text': 'Tilfældigt hva, din XXXXXXXXXX 🤬🤬🤬',
  'sampling': 'keyword_twitter',
  'subtask_A': 1,
  'subtask_B': 0,
  'subtask_C1': 3,
  'subtask_C2': 6
}
```

Data Fields

  • id: a string feature, unique identifier in this dataset.
  • dataset_id: a string feature, internal annotation identifier.
  • label_id: a string feature, internal annotation sequence number.
  • text: a string feature, the text of the post that was annotated.
  • sampling: a string feature describing which sampling technique surfaced this message.
  • subtask_A: is the text abusive ABUS or not NOT? 0: NOT, 1: ABUS
  • subtask_B: for abusive text, what's the target - individual IND, group GRP, other OTH, or untargeted UNT? 0: IND, 1: GRP, 2: OTH, 3: UNT, 4: not applicable
  • subtask_C1: for group-targeted abuse, what's the group - misogynistic SEX, other OTH, or racist RAC? 0: SEX, 1: OTH, 2: RAC, 3: not applicable
  • subtask_C2: for misogyny, is it neosexist NEOSEX, discrediting DISCREDIT, normative stereotyping NOR, benevolent sexism AMBIVALENT, dominance DOMINANCE, or harassment HARASSMENT? 0: NEOSEX, 1: DISCREDIT, 2: NOR, 3: AMBIVALENT, 4: DOMINANCE, 5: HARASSMENT, 6: not applicable
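The integer codes above can be mapped back to their label names with a small helper; a minimal sketch, where the mapping dictionaries mirror the field descriptions and the helper name is illustrative:

```python
# Label mappings taken directly from the field descriptions above.
SUBTASK_A = {0: "NOT", 1: "ABUS"}
SUBTASK_B = {0: "IND", 1: "GRP", 2: "OTH", 3: "UNT", 4: "not applicable"}
SUBTASK_C1 = {0: "SEX", 1: "OTH", 2: "RAC", 3: "not applicable"}
SUBTASK_C2 = {0: "NEOSEX", 1: "DISCREDIT", 2: "NOR", 3: "AMBIVALENT",
              4: "DOMINANCE", 5: "HARASSMENT", 6: "not applicable"}

def decode_labels(example: dict) -> dict:
    """Return a copy of an instance with human-readable subtask labels."""
    return {
        **example,
        "subtask_A": SUBTASK_A[example["subtask_A"]],
        "subtask_B": SUBTASK_B[example["subtask_B"]],
        "subtask_C1": SUBTASK_C1[example["subtask_C1"]],
        "subtask_C2": SUBTASK_C2[example["subtask_C2"]],
    }
```

Applied to the 'train' example above, this yields ABUS (abusive), IND (individual target), and "not applicable" for subtasks C1 and C2.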

Data Splits

| name  | train           |
|-------|-----------------|
| bajer | 27880 sentences |

Dataset Creation

Curation Rationale

The goal was to collect data for developing an annotation schema of online misogyny.

Random sampling of text often results in a scarcity of examples of specifically misogynistic content (e.g. Wulczyn et al., 2017; Founta et al., 2018). Therefore, we used the common alternative of collecting data with predefined keywords that have a potentially high search hit rate (e.g. Waseem and Hovy, 2016), by identifying relevant user profiles (e.g. Anzovino et al., 2018), and by identifying related topics (e.g. Kumar et al., 2018).

We searched for keywords (specific slurs, hashtags) that are known to occur in sexist posts. These were defined from previous work, a slur list from Reddit, and interviews and surveys about online misogyny among women. We also searched for broader terms such as “sex” or “women”, which do not appear exclusively in misogynistic contexts, for example in the topic search, where we gathered relevant posts and their comments from the social media pages of public media. A complete list of keywords can be found in the appendix of the accompanying paper. A simple keyword filter of this kind is sketched below.
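A minimal sketch of the keyword-based (purposive) sampling step, assuming posts are available as plain strings; the two keywords below are just the illustrative broad terms mentioned above, while the full list lives in the paper's appendix:

```python
# Illustrative keyword filter for purposive sampling; the real keyword list
# (slurs, hashtags, broad terms) is given in the appendix of the paper.
KEYWORDS = ["sex", "kvinder"]  # "kvinder" is Danish for "women"; examples only

def matches_keywords(post: str) -> bool:
    """Return True if the post contains any sampling keyword (case-insensitive)."""
    text = post.lower()
    return any(keyword in text for keyword in KEYWORDS)

# Example usage on a small batch of collected plain-text posts.
posts = ["Hvorfor diskuterer kvinder altid dette?", "Helt almindelig fodboldsnak"]
candidates = [p for p in posts if matches_keywords(p)]
```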

Social media provides a potentially biased, but broad snapshot of online human discourse, with plenty of language and behaviours represented. Following best practice guidelines (Vidgen and Derczynski, 2020), we sampled from a language for which there are no existing annotations of the target phenomenon: Danish.

Different social media platforms attract different user groups and can exhibit domain-specific language (Karan and Šnajder, 2018). Rather than choosing one platform (existing misogyny datasets are primarily based on Twitter and Reddit (Guest et al., 2021)), we sampled from multiple platforms: Statista (2020) shows that the platform with the most Danish users is Facebook, followed by Twitter, YouTube, Instagram and, lastly, Reddit. The dataset was sampled from Twitter, Facebook and Reddit posts as plain text.

Source Data

Initial Data Collection and Normalization

The dataset was sampled from Twitter, Facebook and Reddit posts as plain text. Data was gathered based on: keyword-based search (i.e. purposive sampling); topic-based search; and content from specific users.

Who are the source language producers?

Danish-speaking social media users

Annotations

Annotation process

In annotating our dataset, we built on the MATTER framework (Pustejovsky and Stubbs, 2012) and used the variation presented by Finlayson and Erjavec (2017) (the MALER framework), where the Train & Test stages are replaced by Leveraging of annotations for one’s particular goal, in our case the creation of a comprehensive taxonomy.

We created a set of guidelines for the annotators. The annotators were first asked to read the guidelines and individually annotate about 150 different posts, after which there was a shared discussion. After this pilot round, the volume of samples per annotator was increased and every sample was labeled by 2-3 annotators. When instances were ‘flagged’ or annotators disagreed on them, they were discussed during weekly meetings, and misunderstandings were resolved together with the external facilitator. After round three, when reaching 7k annotated posts (Figure 2 in the paper), we continued with independent annotations, maintaining a 15% instance overlap between randomly picked annotator pairs.

Management of annotator disagreement is an important part of the process design. Disagreements can be resolved by majority voting (Davidson et al., 2017; Wiegand et al., 2019), by labeling an instance as abusive if at least one annotator has labeled it so (Golbeck et al., 2017), or by a third, objective party (Gao and Huang, 2017). Most datasets use crowdsourcing platforms or a few academic experts for annotation (Vidgen and Derczynski, 2020). Inter-annotator agreement (IAA) and classification performance are established as two grounded evaluation measurements for annotation quality (Vidgen and Derczynski, 2020). Comparing the performance of amateur annotators (provided with guidelines) against expert annotators for sexism and racism annotation, Waseem (2016) shows that the quality of amateur annotations is competitive with expert annotations when several amateurs agree. Facing the trade-off between training annotators intensively and the number of involved annotators, we continued with the trained annotators and group discussions / individual revisions for flagged content and disagreements (Section 5.4 of the paper).
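Pairwise inter-annotator agreement on the overlapping instances can be computed with Cohen's kappa; a minimal sketch using scikit-learn, where the two label arrays are illustrative stand-ins for two annotators' subtask_A decisions on the same posts:

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative subtask_A labels (0: NOT, 1: ABUS) from two annotators on the
# same overlapping instances; the real overlap covers ~15% of samples per pair.
annotator_1 = [0, 1, 1, 0, 0, 1, 0, 0]
annotator_2 = [0, 1, 0, 0, 0, 1, 0, 1]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")
```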

Who are the annotators?

| Attribute | Annotators |
|---|---|
| Gender | 6 female, 2 male (8 total) |
| Age | 5 <30; 3 ≥30 |
| Ethnicity | 5 Danish; 1 Persian, 1 Arabic, 1 Polish |
| Study/occupation | Linguistics (2); Health/Software Design; Ethnography/Digital Design; Communication/Psychology; Anthropology/Broadcast Moderator; Ethnography/Climate Change; Film Artist |

Personal and Sensitive Information

Usernames and other personally identifiable information (PII) were stripped during the annotation process by skipping content that contained them and eliding it from the final dataset.
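A minimal sketch of the kind of screening this implies, assuming posts are handled as plain strings; the patterns below (user handles and e-mail addresses) are illustrative and not the exact procedure used when the dataset was built:

```python
import re

# Illustrative PII patterns: user handles/mentions and e-mail addresses.
PII_PATTERNS = [
    re.compile(r"@\w+"),                      # user handles / mentions
    re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),  # e-mail addresses
]

def contains_pii(post: str) -> bool:
    """Return True if the post matches any of the illustrative PII patterns."""
    return any(pattern.search(post) for pattern in PII_PATTERNS)

# Posts matching a PII pattern are skipped rather than redacted.
posts = ["hej @nogen, hvad synes du?", "helt anonym kommentar"]
kept = [p for p in posts if not contains_pii(p)]
```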

Considerations for Using the Data

Social Impact of Dataset

The data contains abusive language. It may be possible to identify original speakers based on the content, so the data is only available for research purposes under a restrictive license and conditions. We hope that identifying sexism can help moderators. There is a possibility that the content here could be used to generate misogyny in Danish, which would place women in Denmark in an even more hostile environment, and for this reason data access is restricted and tracked.

Discussion of Biases

We have taken pains to mitigate as many biases as we were aware of in this work.

Selection biases: Selection biases for abusive language can be seen in the sampling of text, for instance when using keyword search (Wiegand et al., 2019), topic dependency (Ousidhoum et al., 2020), users (Wiegand et al., 2019), domain (Wiegand et al., 2019), time (Florio et al., 2020) and lack of linguistic variety (Vidgen and Derczynski, 2020).

Label biases: Label biases can be caused by, for instance, non-representative annotator selection, lack of training or domain expertise, preconceived notions, or pre-held stereotypes. These biases are treated in relation to abusive language datasets by several sources, e.g. general sampling and annotator biases (Waseem, 2016; Al Kuwatly et al., 2020), biases towards minority identity mentions based, for example, on gender or race (Davidson et al., 2017; Dixon et al., 2018; Park et al., 2018; Davidson et al., 2019), and political annotator biases (Wich et al., 2020). Other qualitative biases include, for instance, demographic bias, over-generalization, and topic exposure as social biases (Hovy and Spruit, 2016).

We applied several measures to mitigate biases occurring through the annotation design and execution: First, we selected labels grounded in existing, peer-reviewed research from more than one field. Second, we aimed for diversity in annotator profiles in terms of age, gender, dialect, and background. Third, we recruited a facilitator with a background in ethnographic studies and provided intense annotator training. Fourth, we engaged in weekly group discussions, iteratively improving the codebook and integrating edge cases. Fifth, the selection of platforms from which we sampled data is based on local user representation in Denmark, rather than convenience. Sixth, diverse sampling methods for data collection reduced selection biases.

Other Known Limitations

The data is absolutely NOT a reasonable or in any way stratified sample of social media text, so class prevalence/balance here says nothing about the incidence of these phenomena in the wild. That said, we hypothesize that the distribution of types of misogyny in this data (subtask C2) is roughly representative of how misogyny presents on the studied platforms.

Additional Information

Dataset Curators

The dataset is curated by the paper's authors and the ethnographer-led annotation team.

Licensing Information

The data is licensed under a restrictive usage agreement. Apply for access by accepting the ITU clearance agreement at the top of this page.

Citation Information

@inproceedings{zeinert-etal-2021-annotating,
    title = "Annotating Online Misogyny",
    author = "Zeinert, Philine  and
      Inie, Nanna  and
      Derczynski, Leon",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.247",
    doi = "10.18653/v1/2021.acl-long.247",
    pages = "3181--3197",
}

Contributions

This dataset was added by its author, @leondz.
