Marissa committed on
Commit 3b0d541
1 Parent(s): 75d6aaf

Add model card

This PR has a preliminary model card, open to any feedback! cc @Ezi @Meg @Nazneen

Files changed (1): README.md (+143, −0)
README.md ADDED
---
language: en
tags:
- exbert
datasets:
- bookcorpus
- wikipedia
---

# RoBERTa Base OpenAI Detector

## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)

## Model Details

**Model Description:** RoBERTa Base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model on the outputs of the 1.5B-parameter GPT-2 model. It can be used to predict whether text was generated by a GPT-2 model. OpenAI released this model at the same time as it released the weights of the [largest GPT-2 model](https://huggingface.co/gpt2-xl), the 1.5B parameter version.

- **Developed by:** OpenAI, see the [GitHub repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) and [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for the full author list
- **Model Type:** Fine-tuned transformer-based language model
- **Language(s):** English
- **License:** Unknown
- **Related Models:** [RoBERTa base](https://huggingface.co/roberta-base), [GPT-2 XL (the 1.5B parameter version)](https://huggingface.co/gpt2-xl), [GPT-2 Large (the 774M parameter version)](https://huggingface.co/gpt2-large), [GPT-2 Medium (the 355M parameter version)](https://huggingface.co/gpt2-medium) and [GPT-2 (the 124M parameter version)](https://huggingface.co/gpt2)
- **Resources for more information:**
  - [Research paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) (see, in particular, the section on automated ML-based detection beginning on page 12)
  - [GitHub repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector)
  - [OpenAI blog post](https://openai.com/blog/gpt-2-1-5b-release/)

## How to Get Started with the Model

Use the code below to get started with the model. This is a minimal sketch assuming the model is hosted on the Hugging Face Hub under the `roberta-base-openai-detector` identifier:

```python
from transformers import pipeline

# Load the detector as a text-classification pipeline
detector = pipeline("text-classification", model="roberta-base-openai-detector")

# The classifier returns a label and a confidence score for the input text
print(detector("Hello world! Is this sentence human-written or GPT-2-generated?"))
```
## Uses

#### Direct Use

The model is a classifier that can be used to detect text generated by GPT-2 models.

#### Downstream Use

The model's developers state that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further discussion.

#### Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), suggesting that using the model to evade detection, or to support efforts to evade detection, would be a misuse of the model.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

#### Risks and Limitations

In their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research.

In a related [blog post](https://openai.com/blog/gpt-2-1-5b-release/), the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write:

> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.

The model developers also [report](https://openai.com/blog/gpt-2-1-5b-release/) finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will become increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.

#### Bias

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built on and fine-tuned from) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the [RoBERTa base](https://huggingface.co/roberta-base) and [GPT-2 XL](https://huggingface.co/gpt2-xl) model cards for more information). The developers of this model discuss these issues further in their [paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).

## Training

#### Training Data

The model is a sequence classifier based on RoBERTa base (see the [RoBERTa base model card](https://huggingface.co/roberta-base) for more details on the RoBERTa base training data), fine-tuned using the outputs of the 1.5B GPT-2 model (available [here](https://github.com/openai/gpt-2-output-dataset)).

#### Training Procedure

The model developers write that:

> We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.

They later state:

> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.

See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the training procedure.
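The fine-tuning setup described above pairs human-written WebText samples with GPT-2 outputs as a binary classification dataset. The sketch below illustrates that pairing with made-up placeholder strings; it is not the developers' actual data pipeline:

```python
# Sketch: build labeled examples for a real-vs-generated text classifier.
# The sample strings below are placeholders, not actual WebText or GPT-2 outputs.
webtext_samples = [
    "The committee will meet on Thursday to review the proposal.",
    "Local farmers reported record harvests this season.",
]
gpt2_samples = [
    "The moon is a great place to build a data center, experts say.",
    "In a surprising turn, the river decided to flow uphill.",
]

# Label 0 = human-written (WebText), label 1 = model-generated (GPT-2)
dataset = ([(text, 0) for text in webtext_samples]
           + [(text, 1) for text in gpt2_samples])

# A sequence classifier (here, RoBERTa base) would then be fine-tuned
# on (text, label) pairs like these.
for text, label in dataset:
    print(label, text[:40])
```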
## Evaluation

The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).

#### Testing Data, Factors and Metrics

The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by:

> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.

#### Results

The model developers [find](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf):

> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling ([Holtzman et al., 2019](https://arxiv.org/abs/1904.09751)). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling.

See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), Figure 1 (on page 14) and Figure 2 (on page 16) for full results.
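The accuracy metric above is the fraction of correct predictions over a balanced set of human-written and generated samples. A minimal sketch, using hypothetical predictions rather than the paper's actual results:

```python
# Sketch: detection accuracy over a balanced test set.
# Labels: 0 = human-written (WebText), 1 = GPT-2-generated.
# These labels and predictions are hypothetical, for illustration only.
true_labels = [0, 0, 0, 0, 1, 1, 1, 1]
predictions = [0, 0, 1, 0, 1, 1, 1, 0]

correct = sum(t == p for t, p in zip(true_labels, predictions))
accuracy = correct / len(true_labels)
print(f"accuracy = {accuracy:.2%}")  # 6 of 8 correct -> 75.00%
```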
## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
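The calculator above estimates emissions roughly as power draw × training hours × the grid's carbon intensity. A sketch with hypothetical inputs (the actual hardware and hours for this model are unknown):

```python
# Sketch: rough carbon-emissions estimate, in the spirit of the ML Impact
# calculator. All inputs below are hypothetical placeholders.
gpu_power_kw = 0.3       # average power draw of one GPU, in kW
hours = 24               # total training time, in hours
carbon_intensity = 0.4   # kg CO2-eq emitted per kWh in the compute region

energy_kwh = gpu_power_kw * hours
emissions_kg = energy_kwh * carbon_intensity
print(f"~{emissions_kg:.1f} kg CO2-eq")  # 0.3 * 24 * 0.4 = 2.88 -> "~2.9 kg CO2-eq"
```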
## Technical Specifications

See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the modeling architecture and training details.
130
+
131
+ ## Citation Information
132
+
133
+ ```bibtex
134
+ @article{solaiman2019release,
135
+ title={Release strategies and the social impacts of language models},
136
+ author={Solaiman, Irene and Brundage, Miles and Clark, Jack and Askell, Amanda and Herbert-Voss, Ariel and Wu, Jeff and Radford, Alec and Krueger, Gretchen and Kim, Jong Wook and Kreps, Sarah and others},
137
+ journal={arXiv preprint arXiv:1908.09203},
138
+ year={2019}
139
+ }
140
+ ```
141
+
142
+ APA:
143
+ - Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.