
A fine-tuned BERT model for bias detection in museum artifact descriptions

This model is a fine-tuned version of Google's bert-base-uncased that classifies a given artifact description into one or more categories of bias: subjective, jargon, social, and gender. The model achieves 83% accuracy on biased descriptions.
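
A minimal usage sketch, assuming the model is exposed as a standard transformers multi-label sequence classifier; the 0.5 sigmoid threshold and the example description are illustrative assumptions, and the label names are read from the model's own config rather than hard-coded.

```python
# Minimal sketch, assuming the standard transformers sequence-classification
# API; the 0.5 sigmoid threshold is an assumption, not from the model card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "raasikhk/carlos_bert_v2_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

description = "A crude but charming figurine made by primitive craftsmen."
inputs = tokenizer(description, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label classification: score each bias category independently, so a
# single description can be flagged as, e.g., both subjective and social.
probs = torch.sigmoid(logits)[0]
flagged = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(flagged)
```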

Details

The dataset used to fine-tune the model is the Michael C. Carlos Museum's internal collections database. See our paper for more details on the partnership, model, and pipeline. The model's input limit is 512 tokens, the same as the original BERT model. See our GitHub repository for our complete solution.
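
Because inputs beyond the 512-token limit are truncated, longer catalogue records can be split into overlapping windows and classified window by window. The sketch below reuses the `tokenizer` and `model` objects from the snippet above; the 64-token stride and the union-of-labels strategy are assumptions, not part of the published pipeline.

```python
# Sketch for records longer than 512 tokens, reusing `tokenizer` and `model`
# from the snippet above; the 64-token stride is an assumption.
long_description = "..."  # a catalogue record longer than 512 tokens

enc = tokenizer(
    long_description,
    return_tensors="pt",
    truncation=True,
    max_length=512,
    stride=64,                       # overlap windows so no passage loses context
    return_overflowing_tokens=True,  # one row per 512-token window
    padding=True,
)
enc.pop("overflow_to_sample_mapping")  # bookkeeping field the model does not accept

with torch.no_grad():
    logits = model(**enc).logits

# Take the union of bias labels predicted for any window of the record.
probs = torch.sigmoid(logits)
flagged = {
    model.config.id2label[i]
    for row in probs
    for i, p in enumerate(row)
    if p > 0.5
}
print(flagged)
```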

Model size: 109M parameters (tensor type F32, Safetensors).