---
license: cc-by-nc-nd-4.0
task_categories:
  - text-classification
language:
  - en
tags:
  - media
  - mediabias
  - media-bias
  - media bias
size_categories:
  - 1M<n<10M
dataset_info:
  config_name: plain_text
  splits:
    - name: cognitive_bias
    - name: fake_news
    - name: gender_bias
    - name: hate_speech
    - name: linguistic_bias
    - name: political_bias
    - name: racial_bias
    - name: text_level_bias
configs:
  - config_name: default
    data_files:
      - split: cognitive_bias
        path: mbib-aggregated/cognitive-bias.csv
      - split: fake_news
        path: mbib-aggregated/fake-news.csv
      - split: gender_bias
        path: mbib-aggregated/gender-bias.csv
      - split: hate_speech
        path: mbib-aggregated/hate-speech.csv
      - split: linguistic_bias
        path: mbib-aggregated/linguistic-bias.csv
      - split: political_bias
        path: mbib-aggregated/political-bias.csv
      - split: racial_bias
        path: mbib-aggregated/racial-bias.csv
      - split: text_level_bias
        path: mbib-aggregated/text-level-bias.csv
---

# Dataset Card for Media-Bias-Identification-Benchmark

## Table of Contents

- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Citation Information](#citation-information)

## Dataset Description

### Baseline

| Task | Model | Micro F1 | Macro F1 |
|------|-------|----------|----------|
| cognitive-bias | ConvBERT/ConvBERT | 0.7126 | 0.7664 |
| fake-news | Bart/RoBERTa-T | 0.6811 | 0.7533 |
| gender-bias | RoBERTa-T/ELECTRA | 0.8334 | 0.8211 |
| hate-speech | RoBERTa-T/Bart | 0.8897 | 0.7310 |
| linguistic-bias | ConvBERT/Bart | 0.7044 | 0.4995 |
| political-bias | ConvBERT/ConvBERT | 0.7041 | 0.7110 |
| racial-bias | ConvBERT/ELECTRA | 0.8772 | 0.6170 |
| text-level-bias | ConvBERT/ConvBERT | 0.7697 | 0.7532 |
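
The reported scores are micro- and macro-averaged F1. As a minimal sketch (not the benchmark's original evaluation code), such scores for a fine-tuned classifier can be computed with scikit-learn; the labels below are placeholders for illustration only.

```python
from sklearn.metrics import f1_score

# Placeholder gold labels and predictions for illustration only;
# in practice these come from a model fine-tuned on one MBIB task.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

micro_f1 = f1_score(y_true, y_pred, average="micro")  # aggregates TP/FP/FN globally
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1
print(f"Micro F1: {micro_f1:.4f}  Macro F1: {macro_f1:.4f}")
```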

### Languages

All datasets are in English.

## Dataset Structure

### Data Instances

#### cognitive-bias

An example of a training instance looks as follows.

```json
{
  "text": "A defense bill includes language that would require military hospitals to provide abortions on demand",
  "label": 1
}
```
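
Each task is exposed as a split of the dataset. The snippet below is a minimal sketch of loading one split with the 🤗 `datasets` library; the repository id `mediabiasgroup/mbib-base` is an assumption and should be adjusted if the dataset is hosted under a different path.

```python
from datasets import load_dataset

# Repository id is an assumption; replace it with the actual dataset path if it differs.
cognitive = load_dataset("mediabiasgroup/mbib-base", split="cognitive_bias")

print(cognitive[0])        # e.g. {"text": "...", "label": 1}
print(cognitive.features)  # column types inferred from the CSV files
```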

### Data Fields

- `text`: a sentence from various sources (e.g., news articles, Twitter, other social media).
- `label`: binary bias indicator (0 = unbiased, 1 = biased); the class balance can be inspected per task as sketched below.
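
A quick, self-contained sketch for checking the label distribution of a split (same repository-id assumption as above):

```python
from collections import Counter

from datasets import load_dataset

# Repository id is an assumption; adjust it if the dataset lives elsewhere.
ds = load_dataset("mediabiasgroup/mbib-base", split="hate_speech")

# label: 0 = unbiased, 1 = biased; count how often each value occurs.
print(Counter(ds["label"]))
```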

## Considerations for Using the Data

### Social Impact of Dataset

We believe that MBIB offers a new common ground for research in the domain, especially given the rising amount of (research) attention directed toward media bias.

## Citation Information

```bibtex
@inproceedings{
    title = {Introducing MBIB - the first Media Bias Identification Benchmark Task and Dataset Collection},
    author = {Wessel, Martin and Spinde, Timo and Horych, Tomáš and Ruas, Terry and Aizawa, Akiko and Gipp, Bela},
    year = {2023},
    note = {[in review]}
}
```