bert-finetuning-italian
This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.5598
- Model Preparation Time: 0.0031
- Accuracy: 0.7748
- F1 Macro: 0.7797
- Precision Macro: 0.7863
- Recall Macro: 0.7775
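
The accuracy and macro-averaged metrics above suggest a multi-class text-classification head. Below is a minimal loading sketch under that assumption; the repository id is taken from the card, while the example sentence, the truncation setting, and the use of id2label are illustrative assumptions rather than documented behaviour.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repository id from the model card; the sequence-classification head is an
# assumption based on the accuracy / macro-F1 metrics reported above.
model_id = "msab97/bert-finetuning-italian"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Esempio di frase da classificare."  # hypothetical input sentence
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_class, predicted_class))
```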
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2.1612703354421325e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.051758482154894515
- num_epochs: 10
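
A hedged sketch of a TrainingArguments configuration mirroring the hyperparameters above; output_dir is a placeholder, and any setting not listed in the card (evaluation strategy, logging, checkpointing) is left at its default rather than assumed.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuning-italian",   # hypothetical output directory
    learning_rate=2.1612703354421325e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.051758482154894515,
    num_train_epochs=10,
)
```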
Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy | F1 Macro | Precision Macro | Recall Macro |
|---------------|-------|------|-----------------|------------------------|----------|----------|-----------------|--------------|
| 1.0928 | 1.0 | 735 | 0.7222 | 0.0031 | 0.6878 | 0.6933 | 0.7207 | 0.6978 |
| 0.6827 | 2.0 | 1470 | 0.6133 | 0.0031 | 0.7381 | 0.7456 | 0.7532 | 0.7479 |
| 0.473 | 3.0 | 2205 | 0.6064 | 0.0031 | 0.7544 | 0.7638 | 0.7708 | 0.7602 |
| 0.4259 | 4.0 | 2940 | 0.7100 | 0.0031 | 0.7469 | 0.7578 | 0.7640 | 0.7534 |
| 0.3187 | 5.0 | 3675 | 0.7481 | 0.0031 | 0.7524 | 0.7586 | 0.7631 | 0.7587 |
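
For reference, a hedged sketch of how the macro-averaged columns above can be computed with scikit-learn; passing this function to Trainer via compute_metrics is an assumption about how the card's numbers were produced, not something the card states.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # Accuracy plus macro-averaged precision, recall, and F1,
    # matching the metric columns reported in the table above.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, predictions, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1_macro": f1,
        "precision_macro": precision,
        "recall_macro": recall,
    }
```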
Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0