# Llama3-70b-Instruct-extractor-adaptor
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) on the generator dataset. It achieves the following results on the evaluation set:

- Loss: 0.6518
## Model description
More information needed
## Intended uses & limitations
More information needed
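Since usage is not yet documented, here is a minimal loading sketch for the adapter. It is a reconstruction, not the author's code: the adapter repo id is taken from the model tree at the bottom of this card, while the dtype, device placement, and prompt are illustrative assumptions.

```python
# Minimal loading sketch (not from the original card): attach the PEFT
# adapter to the instruct base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-70B-Instruct"
adapter_id = "Utshav/Llama3-70b-Instruct-extractor-adaptor"  # repo id from the model tree

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumption; pick what fits your hardware
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)

# Hypothetical prompt; the actual extraction task and format are undocumented.
messages = [{"role": "user", "content": "Extract the key fields from: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```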
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03 (fractional value as reported; presumably a warmup ratio of 3% of the 500 training steps)
- training_steps: 500
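For orientation, the settings above map onto `transformers.TrainingArguments` roughly as sketched below. This is a hedged reconstruction, not the original training script; `output_dir` is a placeholder, and the fractional warmup value is read as a warmup ratio.

```python
# Approximate reconstruction of the reported hyperparameters
# (illustrative only; not the original training script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama3-70b-instruct-extractor-adaptor",  # hypothetical path
    learning_rate=2.5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,  # card reports warmup_steps=0.03; a ratio is the likely intent
    max_steps=500,
)
```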
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4651 | 0.0305 | 10 | 1.4614 |
| 1.2857 | 0.0610 | 20 | 1.3825 |
| 1.3614 | 0.0915 | 30 | 1.3106 |
| 1.1613 | 0.1220 | 40 | 1.2354 |
| 1.1483 | 0.1524 | 50 | 1.1536 |
| 1.1123 | 0.1829 | 60 | 1.0586 |
| 1.0873 | 0.2134 | 70 | 0.9631 |
| 0.8711 | 0.2439 | 80 | 0.8951 |
| 0.8709 | 0.2744 | 90 | 0.8485 |
| 0.8406 | 0.3049 | 100 | 0.8152 |
| 0.8181 | 0.3354 | 110 | 0.7901 |
| 0.8111 | 0.3659 | 120 | 0.7741 |
| 0.7284 | 0.3963 | 130 | 0.7599 |
| 0.7409 | 0.4268 | 140 | 0.7474 |
| 0.6901 | 0.4573 | 150 | 0.7374 |
| 0.6622 | 0.4878 | 160 | 0.7287 |
| 0.6912 | 0.5183 | 170 | 0.7212 |
| 0.7425 | 0.5488 | 180 | 0.7156 |
| 0.7135 | 0.5793 | 190 | 0.7095 |
| 0.7233 | 0.6098 | 200 | 0.7040 |
| 0.7314 | 0.6402 | 210 | 0.6985 |
| 0.6590 | 0.6707 | 220 | 0.6916 |
| 0.6884 | 0.7012 | 230 | 0.6870 |
| 0.6891 | 0.7317 | 240 | 0.6845 |
| 0.6736 | 0.7622 | 250 | 0.6813 |
| 0.6487 | 0.7927 | 260 | 0.6784 |
| 0.5908 | 0.8232 | 270 | 0.6755 |
| 0.6864 | 0.8537 | 280 | 0.6728 |
| 0.6581 | 0.8841 | 290 | 0.6688 |
| 0.6816 | 0.9146 | 300 | 0.6667 |
| 0.6503 | 0.9451 | 310 | 0.6648 |
| 0.6625 | 0.9756 | 320 | 0.6626 |
| 0.6392 | 1.0061 | 330 | 0.6616 |
| 0.6319 | 1.0366 | 340 | 0.6613 |
| 0.6228 | 1.0671 | 350 | 0.6613 |
| 0.5918 | 1.0976 | 360 | 0.6606 |
| 0.6028 | 1.1280 | 370 | 0.6589 |
| 0.6563 | 1.1585 | 380 | 0.6569 |
| 0.6154 | 1.1890 | 390 | 0.6556 |
| 0.5797 | 1.2195 | 400 | 0.6545 |
| 0.6137 | 1.2500 | 410 | 0.6538 |
| 0.6174 | 1.2805 | 420 | 0.6533 |
| 0.5981 | 1.3110 | 430 | 0.6528 |
| 0.5793 | 1.3415 | 440 | 0.6526 |
| 0.5626 | 1.3720 | 450 | 0.6523 |
| 0.5864 | 1.4024 | 460 | 0.6520 |
| 0.5874 | 1.4329 | 470 | 0.6519 |
| 0.6221 | 1.4634 | 480 | 0.6518 |
| 0.5727 | 1.4939 | 490 | 0.6517 |
| 0.5776 | 1.5244 | 500 | 0.6518 |
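Validation loss drops steeply over the first ~100 steps and then plateaus near 0.652 from roughly step 440 onward, consistent with the final reported loss of 0.6518.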
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.41.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
## Model tree for Utshav/Llama3-70b-Instruct-extractor-adaptor

- Base model: meta-llama/Meta-Llama-3-70B
- Finetuned: meta-llama/Meta-Llama-3-70B-Instruct