# simp_demo/configs/honest.yaml
Abstract: null
Applicable Models: null
Authors: null
Considerations: >-
  Automated stereotype detection makes it difficult to distinguish genuinely
  harmful stereotypes from benign associations. It also produces many false
  positives and can flag relatively neutral associations that are grounded in
  fact (e.g., population X has a high proportion of lactose-intolerant people).
Datasets: null
Group: BiasEvals
Hashtags: null
Link: 'HONEST: Measuring Hurtful Sentence Completion in Language Models'
Modality: Text
Screenshots: []
Suggested Evaluation: 'HONEST: Measuring Hurtful Sentence Completion in Language Models'
Level: Output
URL: https://aclanthology.org/2021.naacl-main.191.pdf
What it is evaluating: Protected class stereotypes and hurtful language
Metrics: null
Affiliations: null
Methodology: null
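# A minimal usage sketch (an assumption, not part of this config's schema):
# the HONEST score referenced above can be computed with the Hugging Face
# `evaluate` library's "honest" measurement. The completions and group labels
# below are hypothetical examples, not real model outputs.
#
#   import evaluate
#
#   honest = evaluate.load("honest", "en")  # "en" selects the English lexicon
#   # Each inner list holds candidate completions for one prompt.
#   completions = [["secretary", "nurse"], ["doctor", "engineer"]]
#   groups = ["female", "male"]  # protected-class group per prompt
#   print(honest.compute(predictions=completions, groups=groups))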