# roleplay_alpaca
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the TokenBender/roleplay_alpaca dataset. It achieves the following results on the evaluation set:
- Loss: 1.2973
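To inspect the training data, the dataset can be pulled straight from the Hugging Face Hub (a minimal sketch, assuming the dataset is public and the `datasets` library is installed):

```python
from datasets import load_dataset

# Load the roleplay dataset used for fine-tuning (assumes it is public on the Hub).
dataset = load_dataset("TokenBender/roleplay_alpaca")

# Inspect the available splits and one sample record
# (assumes a "train" split exists, which is typical for Alpaca-style datasets).
print(dataset)
print(dataset["train"][0])
```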
## Model description
mistralai/Mistral-7B-v0.1 fine-tuned on the TokenBender/roleplay_alpaca dataset.
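A minimal generation sketch with `transformers`, assuming the fine-tuned weights are published on the Hub (the repo id below is a placeholder, not the card's actual path):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual Hub path of this model.
model_id = "your-username/roleplay_alpaca"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory; assumes a GPU with fp16 support
    device_map="auto",          # requires the `accelerate` package
)

prompt = "You are a wise old wizard. A traveler asks you for advice.\n\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```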
## Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 0.00065
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
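The reported settings map onto `transformers.TrainingArguments` roughly as follows. This is a reconstruction sketch, not the author's original script; the output directory and evaluation cadence are assumptions (eval loss appears every 20 steps in the table below), and the `Trainer` call and data preprocessing are omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roleplay_alpaca",   # assumption: not stated in the card
    learning_rate=6.5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999), as reported
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    warmup_steps=10,
    num_train_epochs=1,
    evaluation_strategy="steps",    # assumption: matches the 20-step eval cadence
    eval_steps=20,
)
```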
## Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5404 | 0.03 | 20  | 1.2764 |
| 1.4523 | 0.06 | 40  | 1.2571 |
| 1.5608 | 0.09 | 60  | 1.2465 |
| 1.5921 | 0.12 | 80  | 1.2575 |
| 1.6043 | 0.15 | 100 | 1.2432 |
| 1.5496 | 0.18 | 120 | 1.2504 |
| 1.3480 | 0.21 | 140 | 1.2452 |
| 1.4638 | 0.24 | 160 | 1.2661 |
| 1.5733 | 0.27 | 180 | 1.2548 |
| 1.5397 | 0.30 | 200 | 1.2674 |
| 1.6154 | 0.33 | 220 | 1.2626 |
| 1.5058 | 0.36 | 240 | 1.2672 |
| 1.3974 | 0.40 | 260 | 1.2659 |
| 1.6654 | 0.43 | 280 | 1.2648 |
| 1.8051 | 0.46 | 300 | 1.2585 |
| 1.7487 | 0.49 | 320 | 1.2736 |
| 1.3612 | 0.52 | 340 | 1.2717 |
| 1.5048 | 0.55 | 360 | 1.2809 |
| 1.7134 | 0.58 | 380 | 1.2885 |
| 1.5524 | 0.61 | 400 | 1.2805 |
| 1.3705 | 0.64 | 420 | 1.2860 |
| 1.4335 | 0.67 | 440 | 1.2896 |
| 1.3642 | 0.70 | 460 | 1.2911 |
| 1.6546 | 0.73 | 480 | 1.2888 |
| 1.5345 | 0.76 | 500 | 1.2973 |
| 1.5968 | 0.79 | 520 | 1.2885 |
| 1.5694 | 0.82 | 540 | 1.2939 |
| 1.5474 | 0.85 | 560 | 1.2892 |
| 1.6981 | 0.88 | 580 | 1.2949 |
| 1.5451 | 0.91 | 600 | 1.2886 |
| 1.5845 | 0.94 | 620 | 1.2941 |
| 1.5143 | 0.97 | 640 | 1.2973 |
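Assuming the reported validation loss is mean token-level cross-entropy (the usual convention for causal language modeling with `Trainer`), the final value of 1.2973 corresponds to a perplexity of roughly 3.66:

```python
import math

eval_loss = 1.2973
perplexity = math.exp(eval_loss)  # cross-entropy loss -> perplexity
print(f"{perplexity:.2f}")        # ~3.66
```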
## Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
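When reproducing these results, it may help to confirm the local environment matches the versions above (a quick check sketch; note that `4.34.0.dev0` implies Transformers was installed from source at the time):

```python
import datasets
import tokenizers
import torch
import transformers

# Compare installed versions against those reported in this card.
expected = {
    "transformers": "4.34.0.dev0",
    "torch": "2.0.1+cu118",
    "datasets": "2.14.5",
    "tokenizers": "0.14.0",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    status = "OK" if installed[name] == want else f"MISMATCH (installed {installed[name]})"
    print(f"{name} {want}: {status}")
```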