# gemma2b-summarize-gpt4o
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the llama-duo/synth_summarize_dataset_dedup dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7931
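As a quick start, the checkpoint can be loaded like any other causal LM with transformers. This is a minimal sketch, assuming the repo id `llama-duo/gemma2b-summarize-gpt4o` (taken from the page title) and that the weights were published as a full model rather than a PEFT adapter:

```python
# Minimal usage sketch (not from the model card). The repo id is assumed
# from the page title; adjust it if the checkpoint lives elsewhere.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "llama-duo/gemma2b-summarize-gpt4o"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

prompt = "Summarize the following text:\n..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```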
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
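The concrete values are not listed above. Purely as a hedged illustration of how such a supervised fine-tune could be wired up, and not the configuration actually used for this model, a TRL `SFTTrainer` setup might look like the following; every numeric value is a placeholder:

```python
# Illustrative sketch only: the dataset and base model names come from the
# card, but none of the hyperparameter values do; they are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_data = load_dataset(
    "llama-duo/synth_summarize_dataset_dedup", split="train"  # split name assumed
)

config = SFTConfig(
    output_dir="gemma2b-summarize-gpt4o",
    num_train_epochs=10,             # the results table covers 10 epochs
    per_device_train_batch_size=4,   # placeholder, not the card's value
    gradient_accumulation_steps=2,   # placeholder
    learning_rate=2e-4,              # placeholder
    logging_steps=10,
)

trainer = SFTTrainer(
    model="google/gemma-2b",         # base model named in the card
    args=config,
    train_dataset=train_data,
)
trainer.train()
```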
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1808        | 1.0   | 146  | 2.4876          |
| 1.0819        | 2.0   | 292  | 2.4820          |
| 1.035         | 3.0   | 438  | 2.4995          |
| 0.9796        | 4.0   | 584  | 2.5387          |
| 0.9366        | 5.0   | 730  | 2.6038          |
| 0.9051        | 6.0   | 876  | 2.6521          |
| 0.8676        | 7.0   | 1022 | 2.7249          |
| 0.8291        | 8.0   | 1168 | 2.7667          |
| 0.8286        | 9.0   | 1314 | 2.7899          |
| 0.8185        | 10.0  | 1460 | 2.7931          |
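Reading down the table, training loss decreases monotonically while validation loss reaches its minimum at epoch 2 (2.4820) and rises steadily afterwards, a typical overfitting signature. The best epoch can be confirmed directly from the tabulated values:

```python
# Validation losses per epoch, copied from the training results table.
val_loss = [2.4876, 2.4820, 2.4995, 2.5387, 2.6038,
            2.6521, 2.7249, 2.7667, 2.7899, 2.7931]

best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__) + 1
print(f"best epoch: {best_epoch}, val loss: {val_loss[best_epoch - 1]}")
# -> best epoch: 2, val loss: 2.482
```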