---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
metrics:
- accuracy
model-index:
- name: lex_glue_ledgar_2
  results: []
---

# lex_glue_ledgar_2

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set (these values correspond to the step-1750 checkpoint in the results table below, which has the lowest validation loss; a metric-computation sketch follows the list):

- Loss: 0.5472
- Accuracy: 0.846
- F1 Macro: 0.7622
- F1 Micro: 0.846
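
The reported metrics are standard classification scores. A minimal sketch (not from this repository) of how accuracy, macro-F1, and micro-F1 can be computed with scikit-learn, assuming integer class predictions and gold labels for the evaluation split:

```python
# Sketch of the metrics reported above; the prediction/label arrays are placeholders.
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(predictions, labels):
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1_macro": f1_score(labels, predictions, average="macro"),
        "f1_micro": f1_score(labels, predictions, average="micro"),
    }

# Example: perfect agreement on three samples gives accuracy == f1_micro == 1.0
print(compute_metrics([3, 1, 7], [3, 1, 7]))
```
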

## Model description

More information needed

## Intended uses & limitations

More information needed
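
The card gives no usage guidance. Below is a hedged inference sketch, assuming the adapter adds a sequence-classification head over the 100 LEDGAR provision classes; the adapter path and label count are assumptions, not stated in this card:

```python
# Hedged usage sketch; adapter repo path and num_labels=100 are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "lex_glue_ledgar_2"  # placeholder: replace with the actual adapter path

tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token

model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=100, torch_dtype=torch.bfloat16
)
model.config.pad_token_id = tokenizer.pad_token_id
model = PeftModel.from_pretrained(model, adapter_id)  # attach the PEFT adapter
model.eval()

text = "Governing Law. This Agreement shall be governed by the laws of the State of New York."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("predicted class id:", logits.argmax(dim=-1).item())
```
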

## Training and evaluation data

More information needed
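
The card does not name the dataset. The model name suggests the LEDGAR task of LexGLUE (contract provision classification); a hedged loading sketch under that assumption:

```python
# Assumption: the data is the LEDGAR subset of LexGLUE, inferred from the model name
# only, not confirmed by this card.
from datasets import load_dataset

dataset = load_dataset("lex_glue", "ledgar", trust_remote_code=True)
print(dataset)              # train / validation / test splits
print(dataset["train"][0])  # {"text": ..., "label": ...}
```
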

## Training procedure


### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
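
A hedged `transformers.TrainingArguments` sketch reproducing the values above. The output directory and the 250-step evaluation/save cadence (visible in the results table) are assumptions beyond what the list states, and the 2-GPU setup comes from the launcher (e.g. `torchrun` or `accelerate`), not from these arguments:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lex_glue_ledgar_2",      # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=32,      # x 2 GPUs -> total train batch size 64
    per_device_eval_batch_size=32,       # x 2 GPUs -> total eval batch size 64
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",         # assumption: matches the 250-step eval cadence
    eval_steps=250,
    save_steps=250,
)
```
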

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Micro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|
| 0.8576        | 0.27  | 250  | 0.9224          | 0.7704   | 0.6393   | 0.7704   |
| 0.7735        | 0.53  | 500  | 0.7367          | 0.806    | 0.6941   | 0.806    |
| 0.7498        | 0.8   | 750  | 0.6500          | 0.8211   | 0.7187   | 0.8211   |
| 0.4705        | 1.07  | 1000 | 0.6080          | 0.8341   | 0.7484   | 0.8341   |
| 0.4717        | 1.33  | 1250 | 0.6027          | 0.8364   | 0.7470   | 0.8364   |
| 0.4793        | 1.6   | 1500 | 0.5638          | 0.8418   | 0.7537   | 0.8418   |
| 0.4884        | 1.87  | 1750 | 0.5472          | 0.846    | 0.7622   | 0.846    |
| 0.2172        | 2.13  | 2000 | 0.5798          | 0.8515   | 0.7693   | 0.8515   |
| 0.224         | 2.4   | 2250 | 0.6039          | 0.8525   | 0.7700   | 0.8525   |
| 0.1555        | 2.67  | 2500 | 0.5900          | 0.8557   | 0.7764   | 0.8557   |
| 0.1949        | 2.93  | 2750 | 0.5838          | 0.8578   | 0.7807   | 0.8578   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2