---
language:
  - en
tags:
  - politics
  - roberta
license: cc-by-nc-sa-4.0
---

# POLITICS

POLITICS is a language model pretrained on English political news articles. It is produced by continued training of RoBERTa with a **P**retraining **O**bjective **L**everaging **I**nter-article **T**riplet-loss using **I**deological **C**ontent and **S**tory, hence the name POLITICS.

Details of our proposed training objectives (i.e., ideology-driven pretraining objectives) and the experimental results of POLITICS can be found in our NAACL 2022 Findings paper and our GitHub repo.

Together with POLITICS, we also release our curated large-scale pretraining dataset (i.e., BIGNEWS), consisting of more than 3.6M political news articles. This asset can be requested here.
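Since POLITICS is a continued-pretraining checkpoint of RoBERTa, it can be loaded with the standard `transformers` API. A minimal sketch is shown below; the repository id `launch/POLITICS` is an assumption, so replace it with this model card's actual path on the Hub:

```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical Hub repo id -- substitute the actual model path for this card.
MODEL_ID = "launch/POLITICS"

# POLITICS shares RoBERTa's architecture, so Auto* classes resolve it directly.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Encode a political news sentence and obtain contextual embeddings.
inputs = tokenizer(
    "The senator proposed a new bill on healthcare reform.",
    return_tensors="pt",
)
outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```

For downstream tasks such as ideology prediction or stance detection, the same checkpoint can instead be loaded with `AutoModelForSequenceClassification` and fine-tuned on labeled data.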

## Citation

Please cite our paper if you use the POLITICS model:

```bibtex
@inproceedings{liu-etal-2022-POLITICS,
    title = "POLITICS: Pretraining with Same-story Article Comparison for Ideology Prediction and Stance Detection",
    author = "Liu, Yujian and
      Zhang, Xinliang Frederick and
      Wegsman, David and
      Beauchamp, Nicholas and
      Wang, Lu",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
    year = "2022",
}
```