Commit 7dedfa2 · Update readme · Parent(s): 950da48

README.md CHANGED
---
task_categories:
- text-classification
language:
- en
---

# Bias in Bios
Bias in Bios was created by De-Arteaga et al. (2019) and published under the MIT license (https://github.com/microsoft/biosbias). The dataset is used to investigate bias in NLP models. It consists of textual biographies used to predict professional occupation; the sensitive attribute is (binary) gender.

The version shared here is the one proposed by Ravfogel et al. (2020), which is slightly smaller due to the unavailability of 5,557 biographies.

The dataset is divided into train (257,000 samples), test (99,000 samples), and dev (40,000 samples) sets.

The classification and sensitive-attribute labels and their proportions are presented below. Distributions are similar across the three sets.
#### Classification labels

| Profession | Numerical label | Proportion (%) | | Profession | Numerical label | Proportion (%) |
|---|---|---|---|---|---|---|
| accountant | 0 | 1.42 | | nurse | 13 | 4.78 |
| architect | 1 | 2.55 | | painter | 14 | 1.95 |
| attorney | 2 | 8.22 | | paralegal | 15 | 0.45 |
| chiropractor | 3 | 0.67 | | pastor | 16 | 0.64 |
| comedian | 4 | 0.71 | | personal_trainer | 17 | 0.36 |
| composer | 5 | 1.41 | | photographer | 18 | 6.13 |
| dentist | 6 | 3.68 | | physician | 19 | 10.35 |
| dietitian | 7 | 1.0 | | poet | 20 | 1.77 |
| dj | 8 | 0.38 | | professor | 21 | 29.8 |
| filmmaker | 9 | 1.77 | | psychologist | 22 | 4.64 |
| interior_designer | 10 | 0.37 | | rapper | 23 | 0.35 |
| journalist | 11 | 5.03 | | software_engineer | 24 | 1.74 |
| model | 12 | 1.89 | | surgeon | 25 | 3.43 |
| nurse | 13 | 4.78 | | teacher | 26 | 4.09 |
| painter | 14 | 1.95 | | yoga_teacher | 27 | 0.42 |
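The numerical labels above can be decoded back into profession names with a plain lookup list. The snippet below is an illustrative sketch (the list is transcribed from the table above; the helper name `decode_profession` is not part of the dataset itself):

```python
# Mapping from numerical classification labels (0-27) to professions,
# transcribed from the table above. List index == numerical label.
PROFESSIONS = [
    "accountant", "architect", "attorney", "chiropractor", "comedian",
    "composer", "dentist", "dietitian", "dj", "filmmaker",
    "interior_designer", "journalist", "model", "nurse", "painter",
    "paralegal", "pastor", "personal_trainer", "photographer", "physician",
    "poet", "professor", "psychologist", "rapper", "software_engineer",
    "surgeon", "teacher", "yoga_teacher",
]

def decode_profession(label: int) -> str:
    """Return the profession name for a numerical classification label."""
    return PROFESSIONS[label]

print(decode_profession(21))  # professor
```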
#### Sensitive attributes

| Gender | Numerical label | Proportion (%) |
|---|---|---|
| Male | 0 | 53.9 |
| Female | 1 | 46.1 |
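To check how balanced a given split is with respect to the sensitive attribute, the 0/1 gender labels can be tallied into percentages. This is a minimal sketch (the `gender_proportions` helper and the toy label list are illustrative, not real dataset values):

```python
from collections import Counter

# Numerical sensitive-attribute labels, as in the table above.
GENDERS = {0: "Male", 1: "Female"}

def gender_proportions(labels):
    """Percentage of each gender in a list of 0/1 sensitive-attribute labels."""
    counts = Counter(labels)
    total = len(labels)
    return {GENDERS[g]: 100.0 * counts[g] / total for g in GENDERS}

# Toy example (not real dataset values):
print(gender_proportions([0, 0, 1, 1, 0]))  # {'Male': 60.0, 'Female': 40.0}
```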
---

(De-Arteaga et al., 2019) Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 120–128. https://doi.org/10.1145/3287560.3287572

(Ravfogel et al., 2020) Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237–7256, Online. Association for Computational Linguistics.