gpt2-jokes

This model is a fine-tuned version of gpt2 on the Fraser/short-jokes dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6748
  • Accuracy: 0.8796
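Since the card reports only loss and token-level accuracy, the loss can also be read as a perplexity. The sketch below assumes the reported loss is the mean token-level cross-entropy in nats (the value the Hugging Face `Trainer` logs for causal language modeling); perplexity is then its exponential:

```python
import math

# Evaluation loss reported above, assumed to be mean cross-entropy in nats.
eval_loss = 0.6748

# Perplexity is the exponential of the cross-entropy loss.
perplexity = math.exp(eval_loss)

print(f"Perplexity: {perplexity:.2f}")  # roughly 1.96
```

Under that assumption, the model's evaluation perplexity is about 1.96.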

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

More information needed
