API endpoint
```shell
curl -X POST \
    -H "Authorization: Bearer YOUR_ORG_OR_USER_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '"json encoded string"' \
    https://api-inference.huggingface.co/models/lvwerra/gpt2-imdb-pos
```


Frameworks: pytorch, tf

Contributed by lvwerra (Leandro von Werra)

How to use this model directly from the 🤗/transformers library:

			
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("lvwerra/gpt2-imdb-pos")
model = AutoModelWithLMHead.from_pretrained("lvwerra/gpt2-imdb-pos")
```
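As a quick sanity check, the loaded model can be prompted to continue a movie review. The prompt and sampling settings below are illustrative choices, not part of the original setup; `AutoModelForCausalLM` is used here as the current replacement for the deprecated `AutoModelWithLMHead`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lvwerra/gpt2-imdb-pos")
model = AutoModelForCausalLM.from_pretrained("lvwerra/gpt2-imdb-pos")

# Illustrative prompt; the fine-tuned model should steer the continuation
# towards a positive review.
inputs = tokenizer("This movie was", return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    max_length=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```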

GPT2-IMDB-pos

What is it?

A small GPT2 language model (lvwerra/gpt2-imdb) fine-tuned to produce positive movie reviews based on the IMDB dataset. The model is trained with rewards from a BERT sentiment classifier (lvwerra/bert-imdb) via PPO.

Training setting

The model was trained for 100 optimisation steps with a batch size of 256, which corresponds to 25,600 training samples. The full experiment setup can be found in the Jupyter notebook in the trl repo.
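The loop structure behind those numbers can be sketched as follows. Everything here is a hypothetical stand-in (stub functions instead of the real language model, the BERT classifier, and trl's PPO trainer); it only illustrates how 100 steps at batch size 256 yield 25,600 samples, each scored by the sentiment classifier to produce a reward:

```python
# Hypothetical sketch of the PPO fine-tuning loop; the stubs below stand in
# for the real language model, BERT sentiment classifier, and PPO trainer.
BATCH_SIZE = 256
N_STEPS = 100

def sample_responses(queries):
    # Stand-in for sampling continuations from the language model.
    return [q + " a wonderful film" for q in queries]

def sentiment_rewards(responses):
    # Stand-in for the BERT classifier: higher score = more positive.
    return [1.0 if "wonderful" in r else -1.0 for r in responses]

def ppo_update(queries, responses, rewards):
    # Stand-in for the PPO optimisation step on one batch.
    return sum(rewards) / len(rewards)

total_samples = 0
for _ in range(N_STEPS):
    queries = ["This movie was"] * BATCH_SIZE
    responses = sample_responses(queries)
    rewards = sentiment_rewards(responses)
    mean_reward = ppo_update(queries, responses, rewards)
    total_samples += len(responses)

print(total_samples)  # 100 steps x 256 = 25600 training samples
```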

Examples

A few examples of the model response to a query before and after optimisation:

| query | response (before) | response (after) | rewards (before) | rewards (after) |
|-------|-------------------|------------------|------------------|-----------------|
| I'd never seen a | heavier, woodier example of Victorian archite... | film of this caliber, and I think it's wonder... | 3.297736 | 4.158653 |
| I love John's work | but I actually have to write language as in w... | and I hereby recommend this film. I am really... | -1.904006 | 4.159198 |
| I's a big struggle | to see anyone who acts in that way. by Jim Th... | , but overall I'm happy with the changes even ... | -1.595925 | 2.651260 |
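The reward columns are sentiment scores from the classifier. One common convention in this kind of setup (not necessarily the exact code used here) is to take the classifier's positive-class logit as the PPO reward; the helper and logit values below are purely illustrative:

```python
def reward_from_logits(logits):
    """Take the positive-class logit of a two-class sentiment classifier
    as the PPO reward (illustrative convention, not exact trl code)."""
    negative_logit, positive_logit = logits
    return positive_logit

# Hypothetical classifier outputs for the same query before and after tuning:
before = reward_from_logits([1.2, -1.6])   # negative-leaning continuation
after = reward_from_logits([-0.9, 2.65])   # positive continuation
print(after - before)  # the PPO objective pushes this gap upward
```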