---
language:
- en
license: cc-by-nc-sa-4.0
size_categories:
- 1K<n<10K
task_categories:
- text-classification
pretty_name: medical-bios
tags:
- medical
dataset_info:
  config_name: standard
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': psychologist
          '1': surgeon
          '2': nurse
          '3': dentist
          '4': physician
  splits:
  - name: train
    num_bytes: 3103127
    num_examples: 8000
  - name: test
    num_bytes: 386285
    num_examples: 1000
  - name: validation
    num_bytes: 382348
    num_examples: 1000
  download_size: 2172066
  dataset_size: 3871760
configs:
- config_name: standard
  data_files:
  - split: train
    path: standard/train-*
  - split: test
    path: standard/test-*
  - split: validation
    path: standard/validation-*
---
## Dataset Description
The dataset comprises English biographies labeled with occupations and binary genders. It supports an occupation classification task in which gender bias can be studied. It includes a subset of 10,000 biographies (8k train / 1k validation / 1k test) covering 5 medical occupations (psychologist, surgeon, nurse, dentist, physician), derived from De-Arteaga et al. (2019).

We collect and release human rationale annotations for a subset of 100 biographies in two settings: non-contrastive and contrastive. In the former, annotators were asked to find the rationale for the question "Why is the person in the following short bio described as an L?", where L is the gold-label occupation, e.g., nurse. In the latter, the question was "Why is the person in the following short bio described as an L rather than an F?", where F (the foil) is another medical occupation, e.g., physician.

You can read more details on the dataset and the annotation process in the paper by Eberle et al. (2023).
## Dataset Structure
We provide the `standard` version of the dataset, where examples look as follows:

```json
{
  "text": "He has been a practicing Dentist for 20 years. He has done BDS. He is currently associated with Sree Sai Dental Clinic in Sowkhya Ayurveda Speciality Clinic, Chennai. ... ",
  "label": 3
}
```
and the newly curated subset of examples including human rationales, dubbed `rationales`, where examples look as follows:
```json
{
  "text": "She is currently practising at Dr Ravindra Ratolikar Dental Clinic in Narayanguda, Hyderabad.",
  "label": 3,
  "foil": 2,
  "words": ["She", "is", "currently", "practising", "at", "Dr", "Ravindra", "Ratolikar", "Dental", "Clinic", "in", "Narayanguda", ",", "Hyderabad", "."],
  "rationales": [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0],
  "contrastive_rationales": [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0],
  "annotations": [[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0], ...],
  "contrastive_annotations": [[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0], ...]
}
```
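The binary masks align one-to-one with the `words` list. A minimal sketch of working with them, using the example above (the `majority_vote` aggregation rule is an assumption for illustration, not necessarily the rule used to derive `rationales` from `annotations`):

```python
# Recover the highlighted tokens by aligning the rationale mask with "words".
words = ["She", "is", "currently", "practising", "at", "Dr", "Ravindra",
         "Ratolikar", "Dental", "Clinic", "in", "Narayanguda", ",",
         "Hyderabad", "."]
rationale = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]

highlighted = [w for w, r in zip(words, rationale) if r]
print(highlighted)  # ['Dental', 'Clinic']

def majority_vote(annotations):
    """Aggregate per-annotator masks into one mask: a token counts as a
    rationale if at least half of the annotators marked it (an assumed rule)."""
    n = len(annotations)
    return [int(sum(col) * 2 >= n) for col in zip(*annotations)]
```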
## Use
To load the `standard` version of the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("coastalcph/medical-bios", "standard")
```
To load the newly curated subset of examples with human rationales:

```python
from datasets import load_dataset

dataset = load_dataset("coastalcph/medical-bios", "rationales")
```
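The integer labels follow the order declared in the `class_label` mapping above. A small pure-Python sketch for decoding them (the `LABELS` list is reproduced from the config; `decode` is a hypothetical helper, not part of the dataset API):

```python
# Label order as declared in the dataset config ('0' through '4').
LABELS = ["psychologist", "surgeon", "nurse", "dentist", "physician"]

def decode(label_id: int) -> str:
    """Map an integer label from the dataset to its occupation name."""
    return LABELS[label_id]

print(decode(3))  # dentist
```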
## Citation
```bibtex
@inproceedings{eberle-etal-2023-rather,
    title = "Rather a Nurse than a Physician - Contrastive Explanations under Investigation",
    author = "Eberle, Oliver  and
      Chalkidis, Ilias  and
      Cabello, Laura  and
      Brandl, Stephanie",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.427",
}
```