Warning: this repository contains harmful content (abusive language, hate speech, stereotypes).

Dataset Card for "Bajer"

THIS PUBLIC-FACING DATASET IS A PREVIEW ONLY

The data reader here works, but the data included is only a preview of the full dataset, for safety and legal reasons.

To apply for access to the entire dataset, complete this form.

When you have the full data, amend _URL in bajer.py to point to the filename of the full data TSV.
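
For example, a minimal sketch of that change, assuming a hypothetical filename bajer_full.tsv (substitute the actual filename supplied with the full data):

# In bajer.py -- "bajer_full.tsv" is a hypothetical placeholder;
# replace it with the filename of the TSV you receive with full access.
_URL = "bajer_full.tsv"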

Dataset Summary

This is a high-quality dataset of social media posts annotated for misogyny. The data is in Danish.

See the accompanying ACL paper Annotating Online Misogyny for full details.

Supported Tasks and Leaderboards

Languages

Danish (bcp47:da)

Dataset Structure

Data Instances

Bajer

In this preview: 10 instances

In the full dataset:

  • Size of downloaded dataset files: 7.29 MiB
  • Size of the generated dataset: 6.57 MiB
  • Total amount of disk used: 13.85 MiB

See the note at the top of this page (and the Licensing Information section below) for how to apply for the full dataset.

An example of 'train' looks as follows.

{
  'id': '0', 
  'dataset_id': '0', 
  'label_id': '0', 
  'text': 'Tilfældigt hva, din XXXXXXXXXX 🤬🤬🤬', 
  'sampling': 'keyword_twitter', 
  'subtask_A': 1, 
  'subtask_B': 0, 
  'subtask_C1': 3, 
  'subtask_C2': 6
}
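
The dataset can be loaded with the Hugging Face datasets library. A minimal sketch, assuming the bajer.py reader script from this repository is in the working directory (recent versions of datasets may require trust_remote_code=True for script-based loaders):

from datasets import load_dataset

# Load via the bajer.py reader script shipped with this repository.
# With the preview TSV this yields 10 rows; after amending _URL as
# described above, it loads the full training split.
ds = load_dataset("bajer.py", split="train")

print(ds[0])        # a dict like the example above
print(ds.features)  # field names and types, described below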

Data Fields

  • id: a string feature, unique identifier in this dataset.
  • dataset_id: a string feature, internal annotation identifier.
  • label_id: a string feature, internal annotation sequence number.
  • text: a string of the text that's annotated.
  • sampling: a string describing which sampling technique surfaced this message.
  • subtask_A: is the text abusive ABUS or not NOT? 0: NOT, 1: ABUS
  • subtask_B: for abusive text, what's the target - individual IND, group GRP, other OTH, or untargeted UNT? 0: IND, 1: GRP, 2: OTH, 3: UNT, 4: not applicable
  • subtask_C1: for group-targeted abuse, what's the group - misogynistic SEX, other OTH, or racist RAC? 0: SEX, 1: OTH, 2: RAC, 3: not applicable
  • subtask_C2: for misogyny, is it neosexist NEOSEX, discrediting DISCREDIT, normative stereotyping NOR, benevolent sexism AMBIVALENT, dominance DOMINANCE, or harassment HARASSMENT? 0: NEOSEX, 1: DISCREDIT, 2: NOR, 3: AMBIVALENT, 4: DOMINANCE, 5: HARASSMENT, 6: not applicable
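
The integer codes above can be mapped back to their label names. A small helper sketch (these mappings simply restate the field descriptions above and are not part of the reader script):

# Label-name mappings restating the field descriptions above.
SUBTASK_A = {0: "NOT", 1: "ABUS"}
SUBTASK_B = {0: "IND", 1: "GRP", 2: "OTH", 3: "UNT", 4: "not applicable"}
SUBTASK_C1 = {0: "SEX", 1: "OTH", 2: "RAC", 3: "not applicable"}
SUBTASK_C2 = {0: "NEOSEX", 1: "DISCREDIT", 2: "NOR", 3: "AMBIVALENT",
              4: "DOMINANCE", 5: "HARASSMENT", 6: "not applicable"}

def decode(example):
    """Attach human-readable label names to a single example."""
    return {
        **example,
        "subtask_A_label": SUBTASK_A[example["subtask_A"]],
        "subtask_B_label": SUBTASK_B[example["subtask_B"]],
        "subtask_C1_label": SUBTASK_C1[example["subtask_C1"]],
        "subtask_C2_label": SUBTASK_C2[example["subtask_C2"]],
    }

For example, ds.map(decode) (with ds loaded as in the sketch above) adds the corresponding *_label columns to every row.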

Data Splits

In the full dataset:

name     train
bajer    27,880 sentences

This preview has only 10 sentences; the link to apply for access to the full data is given at the top of this page.

Dataset Creation

Curation Rationale

The goal was to collect data for developing an annotation schema of online misogyny.

Random sampling of text often results in a scarcity of examples of specifically misogynistic content (e.g. Wulczyn et al., 2017; Founta et al., 2018). We therefore used the common alternative of collecting data using predefined keywords with a potentially high hit rate (e.g. Waseem and Hovy, 2016), identifying relevant user profiles (e.g. Anzovino et al., 2018), and identifying related topics (e.g. Kumar et al., 2018).

We searched for keywords (specific slurs, hashtags) that are known to occur in sexist posts. These were drawn from previous work, a slur list from Reddit, and interviews and surveys of women about online misogyny. We also searched for broader terms like “sex” or “women”, which do not appear exclusively in misogynistic contexts; this was used in the topic search, for example, where we gathered relevant posts and their comments from the social media pages of public media outlets. A complete list of keywords can be found in the appendix of the paper.

Social media provides a potentially biased, but broad snapshot of online human discourse, with plenty of language and behaviours represented. Following best practice guidelines (Vidgen and Derczynski, 2020), we sampled from a language for which there are no existing annotations of the target phenomenon: Danish.

Different social media platforms attract different user groups and can exhibit domain-specific language (Karan and Šnajder, 2018). Rather than choosing one platform (existing misogyny datasets are primarily based on Twitter and Reddit (Guest et al., 2021)), we sampled from multiple platforms: Statista (2020) shows that the platform with the most Danish users is Facebook, followed by Twitter, YouTube, Instagram and, lastly, Reddit.

Source Data

Initial Data Collection and Normalization

The dataset was sampled from Twitter, Facebook and Reddit posts as plain text. Data was gathered based on: keyword-based search (i.e. purposive sampling); topic-based search; and content from specific users.

Who are the source language producers?

Danish-speaking social media users

Annotations

Annotation process

In annotating our dataset, we built on the MATTER framework (Pustejovsky and Stubbs, 2012) and used the variation presented by Finlayson and Erjavec (2017) (the MALER framework), where the Train & Test stages are replaced by Leveraging of the annotations for one’s particular goal, in our case the creation of a comprehensive taxonomy.

We created a set of guidelines for the annotators. The annotators were first asked to read the guidelines and individually annotate about 150 different posts, after which there was a shared discussion. After this pilot round, the volume of samples per annotator was increased and every sample was labeled by 2-3 annotators. When instances were ‘flagged’ or annotators disagreed on them, they were discussed during weekly meetings, and misunderstandings were resolved together with the external facilitator. After round three, when roughly 7k posts had been annotated (Figure 2 in the paper), we continued with independent annotations, maintaining a 15% instance overlap between randomly picked annotator pairs.

Management of annotator disagreement is an important part of the process design. Disagreements can be resolved by majority voting (Davidson et al., 2017; Wiegand et al., 2019), by labeling an instance as abusive if at least one annotator has labeled it so (Golbeck et al., 2017), or by deferring to a third, objective annotator (Gao and Huang, 2017). Most datasets use crowdsourcing platforms or a few academic experts for annotation (Vidgen and Derczynski, 2020). Inter-annotator agreement (IAA) and classification performance are established as two grounded evaluation measurements for annotation quality (Vidgen and Derczynski, 2020). Comparing the performance of amateur annotators (who were given guidelines) with expert annotators for sexism and racism annotation, Waseem (2016) shows that the quality of amateur annotations is competitive with expert annotations when several amateurs agree. Facing the trade-off between intensively training a few annotators and involving a larger number of annotators, we continued with the trained annotators and resolved flagged content and disagreements through group discussions and individual revisions (Section 5.4 of the paper).
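
As an illustration of measuring IAA on those overlapping instances, a minimal sketch using Cohen's kappa on made-up subtask_A labels (illustrative only; the paper reports the agreement methodology and figures actually used):

from sklearn.metrics import cohen_kappa_score

# Hypothetical subtask_A labels from two annotators on their shared
# (15% overlap) instances.
annotator_1 = [1, 0, 0, 1, 1, 0, 0, 0]
annotator_2 = [1, 0, 1, 1, 1, 0, 0, 0]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa (subtask_A): {kappa:.2f}")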

Who are the annotators?

Demographic category    Value
Gender                  6 female, 2 male (8 total)
Age                     5 <30; 3 ≥30
Ethnicity               5 Danish; 1 Persian; 1 Arabic; 1 Polish
Study/occupation        Linguistics (2); Health/Software Design; Ethnography/Digital Design; Communication/Psychology; Anthropology/Broadcast Moderator; Ethnography/Climate Change; Film Artist

Personal and Sensitive Information

Usernames and other personally identifiable information (PII) were stripped during the annotation process, both by skipping content that contained them and by eliding them from the final dataset.

Considerations for Using the Data

Social Impact of Dataset

The data contains abusive language. It may be possible to identify original speakers based on the content, so the data is only available for research purposes under a restrictive license and conditions. We hope that identifying sexism can help moderators. There is a possibility that the content here could be used to generate misogyny in Danish, which would place women in Denmark in an even more hostile environment, and for this reason data access is restricted and tracked.

Discussion of Biases

We have taken pains to mitigate as many biases as we were aware of in this work.

Selection biases: Selection biases for abusive language can be seen in the sampling of text, for instance when using keyword search (Wiegand et al., 2019), topic dependency (Ousidhoum et al., 2020), users (Wiegand et al., 2019), domain (Wiegand et al., 2019), time (Florio et al., 2020) and lack of linguistic variety (Vidgen and Derczynski, 2020).

Label biases: Label biases can be caused by, for instance, non-representative annotator selection, lack of training or domain expertise, preconceived notions, or pre-held stereotypes. These biases are treated in relation to abusive language datasets by several sources, e.g. general sampling and annotator biases (Waseem, 2016; Al Kuwatly et al., 2020), biases towards minority identity mentions based for example on gender or race (Davidson et al., 2017; Dixon et al., 2018; Park et al., 2018; Davidson et al., 2019), and political annotator biases (Wich et al., 2020). Other qualitative biases include, for instance, demographic bias, over-generalization, and topic exposure as social biases (Hovy and Spruit, 2016).

We applied several measures to mitigate biases arising from the annotation design and execution:

  • We selected labels grounded in existing, peer-reviewed research from more than one field.
  • We aimed for diversity in annotator profiles in terms of age, gender, dialect, and background.
  • We recruited a facilitator with a background in ethnographic studies and provided intensive annotator training.
  • We engaged in weekly group discussions, iteratively improving the codebook and integrating edge cases.
  • We selected the platforms from which we sampled data based on local user representation in Denmark, rather than convenience.
  • We used diverse sampling methods for data collection to reduce selection biases.

Other Known Limitations

The data is absolutely NOT a reasonable or in any way stratified sample of social media text, so class prevalence/balance here says nothing about the incidence of these phenomena in the wild. That said, we hypothesise that the distribution of types of misogyny in this data (subtask C2) is roughly representative of how misogyny presents on the studied platforms.
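
To inspect the label balance in whatever portion of the data you have, a quick sketch of the subtask_C2 distribution (reusing the ds object and SUBTASK_C2 mapping from the sketches above):

from collections import Counter

# Count misogyny subtypes (subtask_C2) in the loaded split.
c2_counts = Counter(ex["subtask_C2"] for ex in ds)
for code, count in sorted(c2_counts.items()):
    print(code, SUBTASK_C2[code], count)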

Additional Information

Dataset Curators

The dataset is curated by the paper's authors and the ethnographer-led annotation team.

Licensing Information

The data is licensed under a restrictive usage agreement. Apply for access here.

Citation Information

@inproceedings{zeinert-etal-2021-annotating,
    title = "Annotating Online Misogyny",
    author = "Zeinert, Philine  and
      Inie, Nanna  and
      Derczynski, Leon",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.247",
    doi = "10.18653/v1/2021.acl-long.247",
    pages = "3181--3197",
}

Contributions

This dataset was added by its author, @leondz.
