|
### Model information |
|
|
|
Fine-tuning data 1: https://www.kaggle.com/andradaolteanu/rickmorty-scripts

Base model: e-tony/gpt2-rnm

Epochs: 3

Train runtime: 7.1779 seconds

Loss: 2.5694
|
|
Training notebook: [Colab](https://colab.research.google.com/drive/12NvO1SIZevF8ybJqfN9O21I3i9bU1dOO#scrollTo=KUsyn02WWmf5) |
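A minimal generation sketch with the `transformers` library, assuming network access to the Hugging Face Hub. It loads the base model listed above (`e-tony/gpt2-rnm`); the repo id of this fine-tuned checkpoint is not stated in the card, so substitute it if it differs.

```python
from transformers import pipeline, set_seed

# Hypothetical usage sketch: "e-tony/gpt2-rnm" is the base model named in
# this card; swap in the fine-tuned checkpoint's repo id if you have it.
set_seed(42)
generator = pipeline("text-generation", model="e-tony/gpt2-rnm")

result = generator(
    "Morty: Aw geez, Rick,",  # a show-style prompt
    max_new_tokens=30,        # length of the sampled continuation
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

The `text-generation` pipeline returns a list of dicts whose `generated_text` field includes the prompt followed by the model's continuation.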
|
|
### Teachable NLP
|
|
|
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API for it, for free.
|
|
|
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp) |
|
|
|
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp) |
|
|