Climate obstructive narratives classification model based on RoBERTa-large

This model is a fine-tuned version of RoBERTa-large on a climate obstructive narratives dataset. The method, data, and fine-tuning details can be found on GitHub.

Citation:

@inproceedings{rowlands-etal-2024-predicting,
    title = "Predicting Narratives of Climate Obstruction in Social Media Advertising",
    author = "Rowlands, Harri  and
      Morio, Gaku  and
      Tanner, Dylan  and
      Manning, Christopher",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
}

Model description

The model can be used to classify the text of Facebook ads from fossil fuel entities. The task is multi-label classification, with the following labels (a minimal inference sketch is given after the list):

  • CA: Emphasizes how the oil and gas sector contributes to local and national economies through tax revenues, charitable efforts, and support for local businesses.
  • CB: Focuses on the creation and sustainability of jobs by the oil and gas industry.
  • GA: Highlights efforts to reduce greenhouse gas emissions through internal targets, policy support, voluntary initiatives, and emissions reduction technologies.
  • GC: Promotes "clean" or "green" fossil fuels as part of climate solutions.
  • PA: Portrays oil and gas as essential, reliable, affordable, and safe energy sources critical for maintaining power systems.
  • PB: Emphasizes the importance of oil and gas as raw materials for various non-power-related uses and manufactured goods.
  • SA: Stresses how domestic oil and gas production benefits the nation, including energy independence, energy leadership, and the idea of supporting American energy.
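
Below is a minimal inference sketch, assuming the standard Transformers sequence-classification API. The repository id, the example ad text, and the 0.5 decision threshold are illustrative placeholders; the actual label mapping should be read from the model's config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repository id; replace with this model's actual id on the Hub.
model_id = "your-namespace/climate-obstruction-roberta-large"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Example ad-style text (illustrative only).
text = "Natural gas keeps energy affordable and reliable for American families."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label classification: apply a sigmoid per label and threshold each
# label independently (0.5 is an assumed threshold).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)
```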

Intended uses & limitations

This model is intended for reproducing the results of the paper, and thus for research purposes.

Training and evaluation data

The training dataset was derived from Holder et al. 2023.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent TrainingArguments configuration follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 0
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0
  • mixed_precision_training: Native AMP
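
As a rough illustration, the hyperparameters above correspond to a Transformers TrainingArguments configuration like the sketch below. The output directory is a placeholder, the Adam betas and epsilon listed above match the Trainer defaults, and the authors' actual training script is in the GitHub repository referenced above.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="roberta-large-climate-narratives",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=0,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # Native AMP mixed-precision training
)
```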

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu117
  • Datasets 2.14.5
  • Tokenizers 0.13.3