
GPT-2 Output Detector

This directory contains the code for working with the GPT-2 output detector model, obtained by fine-tuning a RoBERTa model on the outputs of the 1.5B-parameter GPT-2 model. For the motivation and discussion around the release of this detector model, please see our blog post and report.

Downloading a pre-trained detector model

Download the weights for the fine-tuned roberta-base model (478 MB):

wget https://storage.googleapis.com/gpt-2/detector-models/v1/detector-base.pt

or for the fine-tuned roberta-large model (1.5 GB):

wget https://storage.googleapis.com/gpt-2/detector-models/v1/detector-large.pt

These RoBERTa-based models are fine-tuned with a mixture of temperature-1 and nucleus sampling outputs, which should generalize well to outputs generated using different sampling methods.
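
As a quick sanity check after downloading, you can load a checkpoint and score a sample passage directly. The sketch below is an illustration rather than part of this repository: it assumes the checkpoint stores the fine-tuned weights under a 'model_state_dict' key, uses the current Hugging Face transformers API (exact state-dict keys may vary across library versions), and assumes class index 0 means machine-generated and index 1 means human-written.

import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

# Assumption: the checkpoint is a dict holding the weights under 'model_state_dict'.
checkpoint = torch.load('detector-base.pt', map_location='cpu')

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForSequenceClassification.from_pretrained('roberta-base')
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()

text = "The quick brown fox jumps over the lazy dog."
tokens = tokenizer.encode(text, return_tensors='pt')
with torch.no_grad():
    logits = model(tokens).logits
probs = logits.softmax(dim=-1)[0]
# Assumption: index 0 = machine-generated ("fake"), index 1 = human-written ("real").
print(f"fake: {probs[0]:.3f}, real: {probs[1]:.3f}")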

Running a detector model

You can launch a web UI where you can enter text and see the detector model's prediction of whether it was generated by a GPT-2 model.

# (on the top-level directory of this repository)
pip install -r requirements.txt
python -m detector.server detector-base.pt

After the script says "Ready to serve", navigate to http://localhost:8080 to view the UI.
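
The UI gets its predictions from the same server over plain HTTP, so you can also query it from the command line once it is running. The request shape below is an assumption: the text is URL-encoded into the request path, and the server replies with a JSON object containing the real/fake probabilities.

# Assumption: the input text goes URL-encoded in the request path, and the reply
# is a JSON object with fields such as 'real_probability' and 'fake_probability'.
curl -s "http://localhost:8080/The%20quick%20brown%20fox%20jumps%20over%20the%20lazy%20dog."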

Training a new detector model

You can use the provided training script to train a detector model on a new dataset. We recommend using a GPU machine for this task.

# (on the top-level directory of this repository)
pip install -r requirements.txt
python -m detector.train

The training script supports a number of options; append --help to the command above for usage information.
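
Conceptually, the training script fine-tunes a RoBERTa sequence classifier to separate human-written text from GPT-2 samples. The sketch below illustrates that core loop under simplified assumptions (tiny in-memory text lists instead of the GPT-2 output dataset, an assumed label convention of 0 = machine-generated and 1 = human-written, and no evaluation loop); it is not the repository's implementation.

import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import RobertaForSequenceClassification, RobertaTokenizer

# Toy stand-ins; the real script trains on the released GPT-2 output dataset.
real_texts = ["A paragraph written by a person ..."]
fake_texts = ["A paragraph sampled from GPT-2 ..."]

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForSequenceClassification.from_pretrained('roberta-base')

texts = fake_texts + real_texts
labels = [0] * len(fake_texts) + [1] * len(real_texts)  # assumed: 0 = fake, 1 = real

enc = tokenizer(texts, truncation=True, padding=True, return_tensors='pt')
loader = DataLoader(
    TensorDataset(enc['input_ids'], enc['attention_mask'], torch.tensor(labels)),
    batch_size=8, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for input_ids, attention_mask, batch_labels in loader:
    optimizer.zero_grad()
    out = model(input_ids, attention_mask=attention_mask, labels=batch_labels)
    out.loss.backward()  # cross-entropy loss from the classification head
    optimizer.step()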