DistilRoBERTa fine-tuned for bias detection

This model is based on the distilroberta-base pretrained weights, with a classification head fine-tuned to classify text into two categories: neutral and biased.
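
A minimal usage sketch with the Hugging Face transformers text-classification pipeline. The model ID comes from this card; the exact label strings returned depend on the model's config, so the example output below is illustrative, not guaranteed.

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub.
classifier = pipeline("text-classification", model="valurank/distilroberta-bias")

result = classifier("The senator's reckless plan would obviously ruin the economy.")
print(result)
# e.g. [{'label': 'BIASED', 'score': 0.97}]  (illustrative; actual label names
# and scores depend on the model's config and input)
```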

Training data

The model was fine-tuned on wikirev-bias, a dataset extracted from English Wikipedia revisions. See https://github.com/rpryzant/neutralizing-bias for details on the WNC (Wiki Neutrality Corpus) of wiki edits.

Inputs

Like its base model, this model accepts inputs of up to 512 tokens.
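
A sketch of handling that limit explicitly, assuming the standard AutoTokenizer/AutoModelForSequenceClassification APIs: longer texts are truncated to 512 tokens before classification.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "valurank/distilroberta-bias"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# A document that may exceed the 512-token limit.
text = "Some long document ... " * 200

# Truncate to the model's maximum input length.
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label name as defined in the model config
```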
