# llama-7b-finnish-instruct-v0.2_Fi__components_size_252_epochs_10_2024-06-21_09-35-06_3556544
This model is a fine-tuned version of [Finnish-NLP/llama-7b-finnish-instruct-v0.2](https://huggingface.co/Finnish-NLP/llama-7b-finnish-instruct-v0.2) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4970
- Accuracy: 0.77
- ChrF: 0.504
- BLEU: 0.417
- SacreBLEU: 0.4
- ROUGE-1: 0.566
- ROUGE-2: 0.386
- ROUGE-L: 0.552
- ROUGE-Lsum: 0.554
- METEOR: 0.59
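The card does not ship usage instructions, so here is a minimal loading sketch, assuming the checkpoint follows the standard `transformers` causal-LM layout; the repository id is taken from the title above and may not match the actual hub path:

```python
# Minimal inference sketch (assumption: standard causal-LM checkpoint layout).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id, copied from the card title; adjust to the real hub path.
model_id = "llama-7b-finnish-instruct-v0.2_Fi__components_size_252_epochs_10_2024-06-21_09-35-06_3556544"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 7B model; half precision keeps memory manageable
    device_map="auto",
)

prompt = "Käännä englanniksi: Hyvää huomenta!"  # example Finnish instruction
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```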
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 252
- training_steps: 2520
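For reference, these values map onto the Hugging Face `Trainer` API roughly as follows. This is a reconstruction from the list above, not the original training script; the dataset, model wiring, output path, and multi-GPU launch are omitted:

```python
# Hypothetical reconstruction of the hyperparameters above via
# transformers.TrainingArguments; the original training script is not published.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",               # assumption: output path not given in the card
    learning_rate=1e-3,
    per_device_train_batch_size=1,  # total train batch size 4 across 4 GPUs
    per_device_eval_batch_size=1,   # total eval batch size 4 across 4 GPUs
    seed=3407,
    max_steps=2520,
    lr_scheduler_type="linear",
    warmup_steps=252,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,
)
```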
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | ChrF | BLEU | SacreBLEU | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | METEOR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.0297 | 4.0 | 252 | 1.3305 | 0.775 | 0.207 | 0.139 | 0.1 | 0.407 | 0.272 | 0.4 | 0.402 | 0.476 |
| 0.0414 | 8.0 | 504 | 0.9221 | 0.772 | 0.319 | 0.23 | 0.2 | 0.484 | 0.333 | 0.477 | 0.48 | 0.563 |
| 0.0472 | 12.0 | 756 | 0.8856 | 0.775 | 0.364 | 0.261 | 0.3 | 0.496 | 0.322 | 0.482 | 0.487 | 0.569 |
| 1.2941 | 16.0 | 1008 | 0.9528 | 0.772 | 0.349 | 0.265 | 0.3 | 0.479 | 0.327 | 0.471 | 0.474 | 0.552 |
| 0.1395 | 20.0 | 1260 | 0.8777 | 0.771 | 0.384 | 0.284 | 0.3 | 0.508 | 0.338 | 0.495 | 0.495 | 0.567 |
| 0.3282 | 24.0 | 1512 | 0.7412 | 0.771 | 0.403 | 0.312 | 0.3 | 0.514 | 0.336 | 0.504 | 0.508 | 0.568 |
| 0.0135 | 28.0 | 1764 | 0.7096 | 0.77 | 0.409 | 0.309 | 0.3 | 0.532 | 0.367 | 0.524 | 0.522 | 0.573 |
| 0.1001 | 32.0 | 2016 | 0.6087 | 0.77 | 0.451 | 0.362 | 0.4 | 0.544 | 0.373 | 0.529 | 0.53 | 0.578 |
| 0.0189 | 36.0 | 2268 | 0.5685 | 0.77 | 0.458 | 0.363 | 0.4 | 0.535 | 0.357 | 0.523 | 0.525 | 0.604 |
| 0.0168 | 40.0 | 2520 | 0.4970 | 0.77 | 0.504 | 0.417 | 0.4 | 0.566 | 0.386 | 0.552 | 0.554 | 0.59 |
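The card does not state which metric implementations were used; a sketch of how these metric families can be computed with the Hugging Face `evaluate` library (an assumption, chosen because its module names match the columns above):

```python
# Sketch of computing the reported metric families with the `evaluate` library.
# Assumption: the exact implementations used for the card are not documented;
# these are the standard Hugging Face metric modules with matching names.
import evaluate

predictions = ["hyvää huomenta maailma"]    # hypothetical model outputs
references = [["hyvää huomenta, maailma"]]  # hypothetical gold references

chrf = evaluate.load("chrf").compute(predictions=predictions, references=references)
sbleu = evaluate.load("sacrebleu").compute(predictions=predictions, references=references)
rouge = evaluate.load("rouge").compute(
    predictions=predictions, references=[r[0] for r in references]
)
meteor = evaluate.load("meteor").compute(
    predictions=predictions, references=[r[0] for r in references]
)

print(chrf["score"], sbleu["score"], rouge["rougeL"], meteor["meteor"])
```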
### Framework versions
- Transformers 4.37.0
- PyTorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2