bethard committed
Commit
ac03c77
1 Parent(s): 52fe272

first draft of model card

Files changed (1): README.md (+102 -0)
README.md CHANGED
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- incivility
metrics:
- f1
---

# Model Card for roberta-base-namecalling

This is a [RoBERTa-base](https://huggingface.co/roberta-base) model fine-tuned on ~12K social media posts annotated for the presence or absence of namecalling.

# How to Get Started with the Model

You can use this model directly with a pipeline for text classification:

```python
>>> import transformers
>>> model_name = "roberta-base-namecalling"
>>> classifier = transformers.TextClassificationPipeline(
...     tokenizer=transformers.AutoTokenizer.from_pretrained(model_name),
...     model=transformers.AutoModelForSequenceClassification.from_pretrained(model_name))
>>> classifier("Be careful around those Democrats.")
[{'label': 'not-namecalling', 'score': 0.9995089769363403}]
>>> classifier("Be careful around those DemocRats.")
[{'label': 'namecalling', 'score': 0.996940016746521}]
```

# Model Details

This is a 2023 update of the model built by [Ozler et al. (2020)](https://aclanthology.org/2020.alw-1.4/), incorporating data from [Rains et al. (2021)](https://doi.org/10.1093/hcr/hqab009) and using a more recent version of the transformers library.

- **Developed by:**
  [Steven Bethard](https://bethard.github.io/),
  [Kate Kenski](https://comm.arizona.edu/user/kate-kenski),
  [Steve Rains](https://comm.arizona.edu/user/steve-rains),
  [Yotam Shmargad](https://www.yotamshmargad.com/),
  [Kevin Coe](https://faculty.utah.edu/u0915886-Kevin_Coe/)
- **Language:** en
- **License:** apache-2.0
- **Parent Model:** [roberta-base](https://huggingface.co/roberta-base)
- **Resources for more information:**
  - [GitHub Repo](https://github.com/clulab/incivility)
  - Kadir Bulut Ozler, Kate Kenski, Steve Rains, Yotam Shmargad, Kevin Coe, and Steven Bethard. [Fine-tuning for multi-domain and multi-label uncivil language detection](https://aclanthology.org/2020.alw-1.4/). In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 28–33, Online, November 2020. Association for Computational Linguistics.
  - Stephen A. Rains, Yotam Shmargad, Kevin Coe, Kate Kenski, and Steven Bethard. [Assessing the Russian Troll Efforts to Sow Discord on Twitter during the 2016 U.S. Election](https://doi.org/10.1093/hcr/hqab009). Human Communication Research, 47(4): 477–486. August 2021.
  - Stephen A. Rains, Jake Harwood, Yotam Shmargad, Kate Kenski, Kevin Coe, and Steven Bethard. [Engagement with partisan Russian troll tweets during the 2016 U.S. presidential election: a social identity perspective](https://doi.org/10.1093/joc/jqac037). Journal of Communication, 73(1): 38–48. February 2023.

# Uses

The model is intended for text classification: it takes social media posts as input and predicts whether each post contains namecalling.
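
For that intended use, the pipeline can also score several posts at once; a small sketch (the example posts here are illustrative, not from the training data):

```python
import transformers

classifier = transformers.pipeline(
    "text-classification", model="roberta-base-namecalling")

# Hypothetical posts; a pipeline called on a list returns one result per item.
posts = [
    "Those councilmembers are spineless cowards.",
    "I disagree with the council's vote.",
]
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']} ({result['score']:.3f}): {post}")
```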

It is not intended to generate namecalling, and it should not be used as part of any incivility generation model.

# Training Details

The model was trained on data from four sources: comments on the Arizona Daily Star website from 2011, Russian troll Tweets from 2012-2018, Tucson politician Tweets from 2018, and US presidential primary Tweets from 2019.
Each dataset was annotated for the presence of namecalling following the approach of [Coe et al. (2014)](https://doi.org/10.1111/jcom.12104) and split into training, development, and test partitions.

The [roberta-base](https://huggingface.co/roberta-base) model was fine-tuned on the combined training partitions from all four datasets, with texts tokenized using the standard [roberta-base](https://huggingface.co/roberta-base) tokenizer.
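
The card does not include the training script itself, so the following is only a minimal sketch of this kind of fine-tuning with the transformers `Trainer` API. The file names (`train.csv`, `dev.csv`), column names (`text`, 0/1 `label`), and hyperparameters are assumptions for illustration, not details from the card:

```python
import transformers
from datasets import load_dataset

base_model = "roberta-base"

# Assumed data layout: CSV files with a "text" column and a 0/1 "label" column.
data = load_dataset("csv", data_files={"train": "train.csv", "dev": "dev.csv"})

# Standard roberta-base tokenization, truncating long posts to the model's max length.
tokenizer = transformers.AutoTokenizer.from_pretrained(base_model)
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

model = transformers.AutoModelForSequenceClassification.from_pretrained(
    base_model, num_labels=2)

trainer = transformers.Trainer(
    model=model,
    args=transformers.TrainingArguments(
        output_dir="roberta-base-namecalling",
        learning_rate=2e-5,      # illustrative hyperparameters, not the originals
        num_train_epochs=3,
    ),
    train_dataset=data["train"],
    eval_dataset=data["dev"],
    data_collator=transformers.DataCollatorWithPadding(tokenizer),
)
trainer.train()
```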

# Evaluation

The model was evaluated on the test partition of each of the four datasets. It achieves the following F1 scores (a sketch of the metric computation follows the list):

- 0.58 F1 on Arizona Daily Star comments
- 0.71 F1 on Russian troll Tweets
- 0.71 F1 on Tucson politician Tweets
- 0.81 F1 on US presidential primary Tweets
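
The evaluation code is not part of the card; a minimal sketch of how such a score could be computed, assuming one `test.csv` per source (with `text` and 0/1 `label` columns, where 1 means namecalling) and F1 on the namecalling class:

```python
import transformers
from datasets import load_dataset
from sklearn.metrics import f1_score

classifier = transformers.pipeline(
    "text-classification", model="roberta-base-namecalling")

# Assumed file name; load_dataset puts a single CSV under the "train" split.
test = load_dataset("csv", data_files="test.csv")["train"]

# Map the pipeline's string labels back to 0/1 and score the positive class.
predictions = [1 if r["label"] == "namecalling" else 0
               for r in classifier(test["text"])]
print(f1_score(test["label"], predictions))
```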

# Limitations and Biases

The human coders and their trainers were mostly [Western, educated, industrialized, rich and democratic (WEIRD)](https://www.nature.com/articles/466029a), which may have shaped how they evaluated incivility.
The trained models will reflect such biases.

# Environmental Impact

- **Hardware Type:** Tesla V100S-PCIE-32GB
- **Hours used:** 22
- **HPC Provider:** <https://hpc.arizona.edu/>
- **Carbon Emitted:** 2.85 kg CO2 (estimated by [ML CO2 Impact](https://mlco2.github.io/impact#compute))

# Citation

```bibtex
@inproceedings{ozler-etal-2020-fine,
    title = "Fine-tuning for multi-domain and multi-label uncivil language detection",
    author = "Ozler, Kadir Bulut and
      Kenski, Kate and
      Rains, Steve and
      Shmargad, Yotam and
      Coe, Kevin and
      Bethard, Steven",
    booktitle = "Proceedings of the Fourth Workshop on Online Abuse and Harms",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.alw-1.4",
    doi = "10.18653/v1/2020.alw-1.4",
    pages = "28--33",
}
```