# simp_demo/configs/crowspairs.yaml
Abstract: "Pretrained language models, especially masked language models (MLMs) have seen success across many NLP tasks. However, there is ample evidence that they use the cultural biases that are undoubtedly present in the corpora they are trained on, implicitly creating harm with biased representations. To measure some forms of social bias in language models against protected demographic groups in the US, we introduce the Crowdsourced Stereotype Pairs benchmark (CrowS-Pairs). CrowS-Pairs has 1508 examples that cover stereotypes dealing with nine types of bias, like race, religion, and age. In CrowS-Pairs a model is presented with two sentences: one that is more stereotyping and another that is less stereotyping. The data focuses on stereotypes about historically disadvantaged groups and contrasts them with advantaged groups. We find that all three of the widely-used MLMs we evaluate substantially favor sentences that express stereotypes in every category in CrowS-Pairs. As work on building less biased models advances, this dataset can be used as a benchmark to evaluate progress."
Applicable Models:
- BERT-base (Opensource access)
- RoBERTa-large (Opensource access)
- ALBERT-xxlv2 (Opensource access)
Authors: Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman
Considerations: Automating stereotype detection makes it difficult to distinguish
  genuinely harmful stereotypes from benign associations. It also produces many
  false positives and can flag relatively neutral associations that are grounded
  in fact (e.g. population X has a high proportion of lactose-intolerant people).
Datasets: https://huggingface.co/datasets/crows_pairs
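# A minimal sketch of loading the benchmark with the Hugging Face `datasets`
# library; the field names below follow the dataset card (verify against the
# current version, which may also require trust_remote_code=True):
#
#   from datasets import load_dataset
#
#   pairs = load_dataset("crows_pairs", split="test")
#   example = pairs[0]
#   print(example["sent_more"])   # more stereotyping sentence
#   print(example["sent_less"])   # less stereotyping sentence
#   print(example["bias_type"])   # bias category (stored as a class label)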
Group: BiasEvals
Hashtags: null
Link: 'CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language
Models'
Modality: Text
Screenshots:
- Images/CrowsPairs1.png
- Images/CrowsPairs2.png
Suggested Evaluation: CrowS-Pairs
Level: Dataset
URL: https://arxiv.org/abs/2010.00133
What it is evaluating: Protected class stereotypes
Metrics:
- Pseudo Log-Likelihood Masked LM Scoring
Affiliations: New York University
Methodology: Pairs of sentences that differ only in words identifying a demographic group (e.g. stereotypical names or gender markers) are presented to the model. Each token in a sentence is masked in turn, the model predicts the masked token, and the resulting log-likelihoods are summed into a pseudo-log-likelihood score; the two sentences' scores are then compared to see which one the model prefers.
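# A minimal sketch of the pseudo-log-likelihood scoring described above, using
# `transformers` with BERT-base. Note that the paper's metric masks only the
# tokens shared between the paired sentences; this simplified version masks
# every token, so it is an illustration rather than the exact metric.
#
#   import torch
#   from transformers import AutoTokenizer, AutoModelForMaskedLM
#
#   tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
#   model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()
#
#   def pseudo_log_likelihood(sentence: str) -> float:
#       ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
#       total = 0.0
#       with torch.no_grad():
#           for i in range(1, ids.size(0) - 1):  # skip [CLS] and [SEP]
#               masked = ids.clone()
#               masked[i] = tokenizer.mask_token_id
#               logits = model(masked.unsqueeze(0)).logits[0, i]
#               total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
#       return total
#
#   # A model favors the stereotype on a pair when it scores the more
#   # stereotyping sentence higher than the less stereotyping one:
#   # biased = pseudo_log_likelihood(sent_more) > pseudo_log_likelihood(sent_less)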