# gpt2-jokes
This model is a fine-tuned version of gpt2 on the Fraser/short-jokes dataset. It achieves the following results on the evaluation set:
- Loss: 0.6748
- Accuracy: 0.8796
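
A minimal usage sketch with the transformers text-generation pipeline. The repository id `username/gpt2-jokes` below is a placeholder, not the actual Hub id of this checkpoint; replace it with the real one:

```python
# Hedged usage sketch: "username/gpt2-jokes" is a placeholder model id.
from transformers import pipeline

generator = pipeline("text-generation", model="username/gpt2-jokes")

# Sample a short completion from a joke-style prompt.
outputs = generator(
    "Why did the chicken",
    max_new_tokens=40,
    do_sample=True,
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```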
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
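
The exact training setup is not documented in this card. As a hedged sketch, a causal-LM fine-tune like this one is typically reproduced with the Trainer API roughly as follows; the hyperparameters, the text column name, and the sequence length below are assumptions, not values taken from this run:

```python
# Hedged reconstruction of a typical GPT-2 fine-tune on Fraser/short-jokes.
# All hyperparameters here are assumptions; the card does not report them.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("Fraser/short-jokes", split="train")

def tokenize(batch):
    # Assumption: the joke text lives in a "text" column; check the dataset card.
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="gpt2-jokes",
    per_device_train_batch_size=8,  # assumption: not reported in the card
    num_train_epochs=1,             # assumption: not reported in the card
    learning_rate=5e-5,             # transformers' default
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False makes the collator build causal-LM labels from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```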