Model information

Fine tuning data 1:
Fine tuning data 2:
Base model: e-tony/gpt2-rnm
Epoch: 2
Train runtime: 790.0612 secs
Loss: 2.8569
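
The fields above describe a GPT-2 checkpoint fine-tuned from e-tony/gpt2-rnm. A minimal usage sketch with the `transformers` library follows, assuming the checkpoint is hosted on the Hugging Face Hub; since the fine-tuned model's own Hub id is not given above, the base model id is used purely for illustration — substitute your fine-tuned checkpoint's id.

```python
# Minimal sketch: text generation with a GPT-2 checkpoint via transformers.
# "e-tony/gpt2-rnm" is the base model named in this card; swap in the
# fine-tuned model's Hub id when using the fine-tuned checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="e-tony/gpt2-rnm")

# The pipeline returns a list of dicts; "generated_text" contains the
# prompt followed by the sampled continuation.
result = generator("Rick:", max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

The same checkpoint can also be loaded directly with `GPT2LMHeadModel.from_pretrained` and `GPT2TokenizerFast.from_pretrained` for finer control over decoding.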

API page: Ainize

Demo page: End-point

===Teachable NLP===

Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API for it, free of charge.

Teachable NLP: Teachable NLP

Tutorial: Tutorial
