# Mistral-7B-0.3_auto
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 on the GaetanMichelet/chat-60_ft_task-1_auto dataset. It achieves the following results on the evaluation set:

- Loss: 1.5283

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

More information needed

### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0663        | 0.8696 | 5    | 1.0320          |
| 0.8870        | 1.9130 | 11   | 0.8524          |
| 0.7166        | 2.9565 | 17   | 0.7895          |
| 0.5246        | 4.0    | 23   | 0.8279          |
| 0.3244        | 4.8696 | 28   | 0.8766          |
| 0.2036        | 5.9130 | 34   | 1.1719          |
| 0.0690        | 6.9565 | 40   | 1.2153          |
| 0.0557        | 8.0    | 46   | 1.3490          |
| 0.0301        | 8.8696 | 51   | 1.4461          |
| 0.0269        | 9.9130 | 57   | 1.5283          |
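The curve above shows a classic overfitting pattern: training loss keeps falling (1.0663 down to 0.0269) while validation loss bottoms out around epoch 3 (0.7895 at step 17) and then climbs steadily to 1.5283. A minimal sketch in plain Python (numbers copied from the table above, helper name hypothetical) that picks the checkpoint early stopping would keep, i.e. the one with the lowest validation loss:

```python
# Training history copied from the results table above.
# Each row: (epoch, step, train_loss, val_loss)
history = [
    (0.8696, 5, 1.0663, 1.0320),
    (1.9130, 11, 0.8870, 0.8524),
    (2.9565, 17, 0.7166, 0.7895),
    (4.0, 23, 0.5246, 0.8279),
    (4.8696, 28, 0.3244, 0.8766),
    (5.9130, 34, 0.2036, 1.1719),
    (6.9565, 40, 0.0690, 1.2153),
    (8.0, 46, 0.0557, 1.3490),
    (8.8696, 51, 0.0301, 1.4461),
    (9.9130, 57, 0.0269, 1.5283),
]

def best_checkpoint(history):
    """Return the row with the lowest validation loss --
    the checkpoint early stopping would retain."""
    return min(history, key=lambda row: row[3])

epoch, step, train_loss, val_loss = best_checkpoint(history)
print(f"best checkpoint: step {step} (epoch {epoch}), val loss {val_loss}")
# -> best checkpoint: step 17 (epoch 2.9565), val loss 0.7895
```

Note that the final checkpoint (val loss 1.5283) is almost twice as bad as the best one, so for downstream use the step-17 weights are the sensible choice if they were saved.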
Base model: mistralai/Mistral-7B-v0.3