llama-13b_alpaca-clean_l0.0002_64 / train_results.json
Commit daae4c6 (verified): "End of training"
{
"epoch": 3.0,
"total_flos": 2.1746709092395008e+18,
"train_loss": 1.2714244133601815,
"train_runtime": 406836.4039,
"train_samples_per_second": 0.374,
"train_steps_per_second": 0.023
}
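For context, the logged Trainer metrics above can be turned into human-readable figures (wall-clock hours, total samples seen, total optimizer steps) with a short snippet. This is a sketch, not part of the original artifact; the JSON is inlined verbatim from the file so the example is self-contained:

```python
import json

# train_results.json, copied verbatim from the file above.
raw = """
{
    "epoch": 3.0,
    "total_flos": 2.1746709092395008e+18,
    "train_loss": 1.2714244133601815,
    "train_runtime": 406836.4039,
    "train_samples_per_second": 0.374,
    "train_steps_per_second": 0.023
}
"""
results = json.loads(raw)

# Derived figures: runtime is in seconds; throughput fields are
# averages over the whole run, so rate * runtime recovers totals.
hours = results["train_runtime"] / 3600
samples = results["train_runtime"] * results["train_samples_per_second"]
steps = results["train_runtime"] * results["train_steps_per_second"]

print(f"runtime:  {hours:.1f} h")          # ~113.0 h
print(f"samples:  {samples:,.0f}")         # ~152,157 (over 3 epochs)
print(f"steps:    {steps:,.0f}")           # ~9,357
```

At roughly 152k samples over 3 epochs, this is consistent with a ~50k-example dataset (alpaca-clean) trained at an effective batch size of about 16 samples per step, though the JSON itself does not record batch size.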