
GPT-Greentext-355m

A fine-tuned version of GPT2-Medium on the 'greentext' dataset (linked above). A demo is available here; the demo playground is recommended over the inference box on the right.

The largest model in this series is located here: GPT-Greentext-1.5b

Training Procedure

This model was trained on the 'greentext' dataset using the "Happy Transformer" library on Google Colab, for 15 epochs with a learning rate of 1e-2.
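
For reference, a training run with Happy Transformer along these lines would reproduce the settings above. This is a minimal sketch, not the original training script: the starting checkpoint and the file name train.txt are assumptions.

#Fine-tune GPT2-Medium on a plain-text dump of the dataset (sketch, not the original script):
from happytransformer import HappyGeneration, GENTrainArgs

happy_gen = HappyGeneration("GPT2", "gpt2-medium")  #assumed starting checkpoint
train_args = GENTrainArgs(num_train_epochs=15, learning_rate=1e-2)  #settings stated in the card
happy_gen.train("train.txt", args=train_args)  #"train.txt" is a hypothetical file name
happy_gen.save("GPT-Greentext-355m/")  #save the fine-tuned weights locally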

Biases & Limitations

This model likely carries the same biases and limitations as the original GPT2 on which it is based, along with heavy biases from the greentext dataset itself. It will likely generate offensive output.

Intended Use

This model is meant for fun, nothing else.

Sample Use

#Import model:
from happytransformer import HappyGeneration
happy_gen = HappyGeneration("GPT2", "DarwinAnim8or/GPT-Greentext-355m")

#Set generation settings:
from happytransformer import GENSettings
args_top_k = GENSettings(no_repeat_ngram_size=3, do_sample=True, top_k=80, temperature=0.8, max_length=150, early_stopping=False)

#Generate a response:
result = happy_gen.generate_text(""">be me
>""", args=args_top_k)

print(result)       #full generation result object
print(result.text)  #generated text only
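
If you would rather not depend on Happy Transformer, the model can also be loaded directly with the Hugging Face transformers library. This is a minimal sketch of equivalent usage, not part of the original card; the sampling settings simply mirror the ones above.

#Equivalent generation with the transformers pipeline (sketch):
from transformers import pipeline

generator = pipeline("text-generation", model="DarwinAnim8or/GPT-Greentext-355m")
output = generator(">be me\n>", max_length=150, do_sample=True, top_k=80, temperature=0.8, no_repeat_ngram_size=3)
print(output[0]["generated_text"])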
