
DistilRoBERTa fine-tuned for bias detection

This model is based on the distilroberta-base pretrained weights, with a classification head fine-tuned to classify text into two categories: neutral or biased.

Training data

The model was fine-tuned on wikirev-bias, a dataset extracted from English Wikipedia revisions; see https://github.com/rpryzant/neutralizing-bias for details on the WNC corpus of Wikipedia edits.

Inputs

Like its base model, this model accepts inputs of up to 512 tokens.
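
A minimal usage sketch with the transformers `pipeline` API is shown below. The repository ID `your-org/distilroberta-bias` is a placeholder, not the actual model path; substitute the ID of this model on the Hub. Passing `truncation=True` with `max_length=512` keeps longer inputs within the model's token limit.

```python
from transformers import pipeline

# Placeholder model ID -- replace with this model's actual Hub repository path.
MODEL_ID = "your-org/distilroberta-bias"
MAX_LENGTH = 512  # token limit inherited from distilroberta-base


def classify(texts, classifier=None):
    """Label each text as neutral or biased, truncating to the model's limit."""
    if classifier is None:
        classifier = pipeline("text-classification", model=MODEL_ID)
    return classifier(texts, truncation=True, max_length=MAX_LENGTH)


if __name__ == "__main__":
    print(classify(["The senator's remarks were widely reported."]))
```

Each result is a dict with a `label` (neutral or biased) and a confidence `score`.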
