---
language:
- en
tags:
- politics
- roberta
license: cc-by-nc-sa-4.0
---

## POLITICS

POLITICS is a model pretrained on English political news articles. It is produced by continued pretraining of RoBERTa with a **P**retraining **O**bjective **L**everaging **I**nter-article **T**riplet-loss using **I**deological **C**ontent and **S**tory.

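The inter-article triplet loss named above can be illustrated with a toy sketch. This is not the authors' implementation: the random 2-D vectors, Euclidean distance, and margin below are stand-in assumptions (taking the place of RoBERTa article embeddings), meant only to show the intuition that a same-story article of the same ideology is pulled toward the anchor while a same-story article of a different ideology is pushed away.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss over article embeddings.

    anchor/positive: same-story articles sharing an ideology;
    negative: a same-story article of a different ideology.
    (Euclidean distance and margin=1.0 are illustrative choices.)
    """
    d_pos = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D vectors standing in for article encodings.
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])    # near the anchor -> small d_pos
negative = np.array([-1.0, 0.0])   # far from the anchor -> large d_neg

print(triplet_loss(anchor, positive, negative))              # well separated: 0.0
print(triplet_loss(anchor, positive, np.array([0.5, 0.5])))  # too close: positive loss
```

Minimizing a loss of this shape during continued pretraining encourages article embeddings to cluster by ideology within the same story.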
Details of our proposed training objectives (i.e., ideology-driven pretraining objectives) and experimental results of POLITICS can be found in our NAACL 2022 Findings [paper](https://aclanthology.org/2022.findings-naacl.101.pdf) and GitHub [repo](https://github.com/launchnlp/POLITICS).

Together with POLITICS, we also release BIGNEWS, our curated large-scale pretraining dataset consisting of more than 3.6M political news articles. The dataset can be requested [here](https://docs.google.com/forms/d/e/1FAIpQLSf4hft2AHbuak8jHcltVec_2HviaBBVKXPN4OC-CuW4OFORsw/viewform).

## Citation

Please cite our paper if you use the **POLITICS** model:

```bibtex
@inproceedings{liu-etal-2022-POLITICS,
    title = "POLITICS: Pretraining with Same-story Article Comparison for Ideology Prediction and Stance Detection",
    author = "Liu, Yujian  and
      Zhang, Xinliang Frederick  and
      Wegsman, David  and
      Beauchamp, Nicholas  and
      Wang, Lu",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
    year = "2022",
}
```