## POLITICS
POLITICS is a language model pretrained on English political news articles. It is produced by continued pretraining of RoBERTa with a **P**retraining **O**bjective **L**everaging **I**nter-article **T**riplet-loss using **I**deological **C**ontent and **S**tory.

Details of our proposed training objectives (i.e., ideology-driven pretraining objectives) and experimental results of POLITICS can be found in our NAACL 2022 Findings [paper](https://arxiv.org/pdf/2205.00619.pdf) and GitHub [repo](https://github.com/launchnlp/POLITICS).

Together with POLITICS, we also release BIGNEWS, our curated large-scale pretraining dataset of more than 3.6M political news articles. Access can be requested [here](https://docs.google.com/forms/d/e/1FAIpQLSf4hft2AHbuak8jHcltVec_2HviaBBVKXPN4OC-CuW4OFORsw/viewform).
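As a minimal usage sketch, the checkpoint can be loaded as a RoBERTa-based encoder with the Hugging Face `transformers` library. The model id `launch/POLITICS` is an assumption (based on the launchnlp organization) and may differ from the actual hosted id:

```python
# Hedged sketch: load POLITICS as a RoBERTa-based encoder and embed a sentence.
# The model id "launch/POLITICS" is an assumption and may need adjusting.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("launch/POLITICS")
model = AutoModel.from_pretrained("launch/POLITICS")

inputs = tokenizer("The senator introduced a new bill today.", return_tensors="pt")
outputs = model(**inputs)
# Contextual token embeddings: shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```

Since POLITICS is a continued-pretraining checkpoint rather than a task-specific model, it is typically fine-tuned (e.g., with a classification head) for downstream tasks such as ideology prediction or stance detection.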

## Citation
Please cite our paper if you use the **POLITICS** model:
```
@inproceedings{liu-etal-2022-POLITICS,
    title = "POLITICS: Pretraining with Same-story Article Comparison for Ideology Prediction and Stance Detection",
    author = "Liu, Yujian and
      Zhang, Xinliang Frederick and
      Wegsman, David and
      Beauchamp, Nicholas and
      Wang, Lu",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
    year = "2022",
}
```