hib30_0524_epoch_4

This model is a fine-tuned version of projecte-aina/roberta-base-ca-v2-cased-te on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.3876
  • Accuracy: 0.955
  • Precision: 0.9553
  • Recall: 0.955
  • F1: 0.9550
  • Ratio: 0.487
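
The base checkpoint is a Catalan textual-entailment model, so this fine-tune can plausibly be driven through the standard zero-shot-classification pipeline. The snippet below is a minimal sketch under that assumption: the input sentence and hypothesis template are illustrative, and the candidate labels are drawn from the class list in the metrics report further down.

```python
# A hedged usage sketch, assuming the model works with the NLI-style
# zero-shot-classification pipeline; the input sentence and hypothesis
# template are illustrative assumptions, not taken from the model card.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="adriansanz/te-zsc-hybrid",
)

result = classifier(
    # "The street light on my street has been out for three days."
    "El fanal del meu carrer porta tres dies apagat.",
    candidate_labels=["Enllumenat públic", "Neteja de la via pública", "Urbanisme"],
    hypothesis_template="Aquest text tracta sobre {}.",  # assumed Catalan template
)
print(result["labels"][0], result["scores"][0])
```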

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 47
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.06
  • lr_scheduler_warmup_steps: 4
  • num_epochs: 1
  • label_smoothing_factor: 0.1
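
The list above maps onto transformers TrainingArguments roughly as sketched below. This is a hypothetical reconstruction, not the original training script; output_dir is a placeholder, and note that transformers gives a nonzero warmup_steps precedence over warmup_ratio.

```python
# Hypothetical reconstruction of the configuration listed above;
# output_dir is a placeholder, not the author's actual path.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hib30_0524_epoch_4",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,     # effective train batch size: 16 * 2 = 32
    seed=47,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    warmup_steps=4,                    # overrides warmup_ratio when > 0
    num_train_epochs=1,
    label_smoothing_factor=0.1,
    adam_beta1=0.9,                    # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```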

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     | Ratio |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----:|
| 0.3491        | 0.04  | 10   | 0.3923          | 0.951    | 0.9510    | 0.951  | 0.9510 | 0.495 |
| 0.3703        | 0.08  | 20   | 0.3979          | 0.954    | 0.9550    | 0.954  | 0.9540 | 0.476 |
| 0.3298        | 0.12  | 30   | 0.4131          | 0.95     | 0.9500    | 0.95   | 0.9500 | 0.498 |
| 0.3453        | 0.16  | 40   | 0.4259          | 0.948    | 0.9489    | 0.948  | 0.9480 | 0.478 |
| 0.3714        | 0.2   | 50   | 0.4134          | 0.951    | 0.9523    | 0.9510 | 0.9510 | 0.473 |
| 0.3345        | 0.24  | 60   | 0.4098          | 0.949    | 0.9490    | 0.949  | 0.9490 | 0.495 |
| 0.3626        | 0.28  | 70   | 0.3956          | 0.949    | 0.9490    | 0.949  | 0.9490 | 0.503 |
| 0.3712        | 0.32  | 80   | 0.3853          | 0.958    | 0.9587    | 0.958  | 0.9580 | 0.48  |
| 0.3403        | 0.36  | 90   | 0.3945          | 0.954    | 0.9542    | 0.954  | 0.9540 | 0.49  |
| 0.3592        | 0.4   | 100  | 0.4063          | 0.951    | 0.9510    | 0.951  | 0.9510 | 0.505 |
| 0.3839        | 0.44  | 110  | 0.3904          | 0.954    | 0.9552    | 0.954  | 0.9540 | 0.474 |
| 0.3685        | 0.48  | 120  | 0.3999          | 0.949    | 0.9512    | 0.9490 | 0.9489 | 0.465 |
| 0.368         | 0.52  | 130  | 0.3817          | 0.958    | 0.9583    | 0.958  | 0.9580 | 0.488 |
| 0.3658        | 0.56  | 140  | 0.3862          | 0.957    | 0.9572    | 0.957  | 0.9570 | 0.489 |
| 0.3752        | 0.6   | 150  | 0.4040          | 0.954    | 0.9561    | 0.954  | 0.9539 | 0.466 |
| 0.3376        | 0.64  | 160  | 0.3977          | 0.956    | 0.9572    | 0.956  | 0.9560 | 0.474 |
| 0.3531        | 0.68  | 170  | 0.3943          | 0.958    | 0.9587    | 0.958  | 0.9580 | 0.48  |
| 0.3433        | 0.72  | 180  | 0.4013          | 0.956    | 0.9576    | 0.956  | 0.9560 | 0.47  |
| 0.396         | 0.76  | 190  | 0.3928          | 0.955    | 0.9557    | 0.9550 | 0.9550 | 0.481 |
| 0.3993        | 0.8   | 200  | 0.3895          | 0.955    | 0.9555    | 0.955  | 0.9550 | 0.483 |
| 0.3738        | 0.84  | 210  | 0.3865          | 0.955    | 0.9553    | 0.955  | 0.9550 | 0.487 |
| 0.334         | 0.88  | 220  | 0.3872          | 0.954    | 0.9544    | 0.954  | 0.9540 | 0.486 |
| 0.4014        | 0.92  | 230  | 0.3880          | 0.955    | 0.9553    | 0.955  | 0.9550 | 0.487 |
| 0.4279        | 0.96  | 240  | 0.3878          | 0.955    | 0.9553    | 0.955  | 0.9550 | 0.487 |
| 0.358         | 1.0   | 250  | 0.3876          | 0.955    | 0.9553    | 0.955  | 0.9550 | 0.487 |

Framework versions

  • Transformers 4.41.1
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Metrics report

| Class | precision | recall | f1-score | top1-score | top2-score | top3-score | good1-score | good2-score | support |
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| 0 Aigües | 1.000 | 0.960 | 0.980 | 0.960 | 0.960 | 1.000 | 0.960 | 0.960 | 25 |
| 1 Consum, comerç i mercats | 0.852 | 0.920 | 0.885 | 0.920 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| 2 Cultura | 0.917 | 0.880 | 0.898 | 0.880 | 0.960 | 1.000 | 0.960 | 0.960 | 25 |
| 3 Economia | 0.792 | 0.760 | 0.776 | 0.760 | 0.920 | 0.960 | 0.920 | 0.920 | 25 |
| 4 Educació | 0.852 | 0.920 | 0.885 | 0.920 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| 5 Enllumenat públic | 0.920 | 0.920 | 0.920 | 0.920 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| 6 Esports | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| 7 Habitatge | 0.667 | 0.800 | 0.727 | 0.800 | 0.840 | 0.880 | 0.840 | 0.840 | 25 |
| 8 Horta | 0.913 | 0.840 | 0.875 | 0.840 | 0.960 | 1.000 | 0.920 | 0.920 | 25 |
| 9 Informació general | 0.750 | 0.600 | 0.667 | 0.600 | 0.960 | 1.000 | 0.920 | 0.960 | 25 |
| 10 Informàtica | 0.947 | 0.720 | 0.818 | 0.720 | 0.960 | 0.960 | 0.960 | 0.960 | 25 |
| 11 Joventut | 0.913 | 0.840 | 0.875 | 0.840 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| 12 Medi ambient | 0.882 | 0.600 | 0.714 | 0.600 | 0.960 | 0.960 | 0.920 | 0.920 | 25 |
| 13 Neteja de la via pública | 0.792 | 0.760 | 0.776 | 0.760 | 0.960 | 1.000 | 1.000 | 1.000 | 25 |
| 14 Salut pública i Cementiri | 0.880 | 0.880 | 0.880 | 0.880 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| 15 Seguretat | 0.909 | 0.800 | 0.851 | 0.800 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| 16 Serveis socials | 0.857 | 0.960 | 0.906 | 0.960 | 1.000 | 1.000 | 1.000 | 1.000 | 25 |
| 17 Tramitacions | 0.677 | 0.840 | 0.750 | 0.840 | 1.000 | 1.000 | 0.960 | 0.960 | 25 |
| 18 Urbanisme | 0.864 | 0.760 | 0.809 | 0.760 | 0.880 | 0.920 | 0.920 | 0.920 | 25 |
| 19 Via pública i mobilitat | 0.575 | 0.920 | 0.708 | 0.920 | 0.960 | 1.000 | 1.000 | 1.000 | 25 |
| macro avg | 0.848 | 0.834 | 0.835 | 0.834 | 0.966 | 0.984 | 0.964 | 0.966 | 500 |
| weighted avg | 0.848 | 0.834 | 0.835 | 0.834 | 0.966 | 0.984 | 0.964 | 0.966 | 500 |

Overall accuracy: 0.834; error rate: 0.166.
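
The top1/top2/top3 columns read as top-k accuracy. The sketch below shows how per-class precision/recall/F1 and top-k scores of this kind can be computed with scikit-learn, assuming gold label ids and predicted class probabilities are available; the good1/good2 columns are non-standard and are not reproduced here.

```python
# A sketch of per-class and top-k evaluation with scikit-learn,
# assuming y_true holds gold label ids (shape [n]) and y_proba holds
# predicted class probabilities (shape [n, 20]). Random stand-in data
# is used so the snippet runs on its own.
import numpy as np
from sklearn.metrics import classification_report, top_k_accuracy_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 20, size=500)           # stand-in gold labels
y_proba = rng.dirichlet(np.ones(20), size=500)   # stand-in probabilities
y_pred = y_proba.argmax(axis=1)

print(classification_report(y_true, y_pred, digits=3))
for k in (1, 2, 3):
    score = top_k_accuracy_score(y_true, y_proba, k=k, labels=np.arange(20))
    print(f"top{k}-score: {score:.3f}")
```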

