
We finetuned GPT-J 6B on the Alpaca-GPT4 instruction dataset (vicgalle/alpaca-gpt4) for 5 epochs (~50,000 steps) using MonsterAPI's no-code LLM finetuner.

The dataset is the unfiltered version of vicgalle/alpaca-gpt4.
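For reference, the dataset can be inspected with the Hugging Face datasets library. This is a minimal sketch assuming the standard `train` split of vicgalle/alpaca-gpt4:

```python
from datasets import load_dataset

# Inspect the instruction data used for finetuning (assumes the default "train" split).
dataset = load_dataset("vicgalle/alpaca-gpt4", split="train")

# Records follow the Alpaca schema: instruction, input, output (plus a formatted "text" field).
print(dataset[0])
```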

The finetuning completed in 7 hours and cost us only $25 for the entire run!
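To generate text with the finetuned checkpoint, the sketch below loads it as a LoRA adapter on top of the base model. This assumes the adapter is published at monsterapi/Gptj-6b_alpaca-gpt4 over EleutherAI/gpt-j-6b, and the Alpaca-style prompt template is an assumption; match whatever template was used during finetuning.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/gpt-j-6b"                  # assumed base model
adapter_id = "monsterapi/Gptj-6b_alpaca-gpt4"    # this repo's adapter weights

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Alpaca-style prompt template (assumed, not confirmed by this card).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a binary search tree is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```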

Hyperparameters & Run details (an equivalent config sketch follows the list):

  • Model Path: EleutherAI/gpt-j-6b
  • Dataset: vicgalle/alpaca-gpt4
  • Learning rate: 0.0003
  • Number of epochs: 5
  • Data split: Training: 90% / Validation: 10%
  • Gradient accumulation steps: 1
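The MonsterAPI finetuner is no-code, but for readers who want to reproduce the run locally, here is a minimal sketch of an equivalent LoRA setup with peft and transformers. Only the learning rate, epoch count, data split, and gradient accumulation steps come from the list above; the LoRA rank/alpha/target modules, sequence length, and batch size are illustrative assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# 90% / 10% train/validation split, as listed above.
dataset = load_dataset("vicgalle/alpaca-gpt4", split="train").train_test_split(test_size=0.1)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
tokenizer.pad_token = tokenizer.eos_token

def tokenize(batch):
    # The "text" field holds the fully formatted Alpaca prompt; max_length is an assumption.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset["train"].column_names)

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")
# LoRA rank/alpha/target modules are illustrative; the card does not specify them.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

args = TrainingArguments(
    output_dir="gptj6b-alpaca-gpt4-lora",
    learning_rate=3e-4,              # 0.0003, as listed
    num_train_epochs=5,              # as listed
    gradient_accumulation_steps=1,   # as listed
    per_device_train_batch_size=1,   # assumption; not specified on the card
    fp16=True,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```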

License: apache-2.0
