---
license: mit
tags:
  - generated_from_trainer
model-index:
  - name: bert_base_tcm_teste
    results: []
---

# bert_base_tcm_teste

This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the results below):

- Loss: 0.0155
- Criterio Julgamento Precision: 0.7965
- Criterio Julgamento Recall: 0.8654
- Criterio Julgamento F1: 0.8295
- Criterio Julgamento Number: 104
- Data Sessao Precision: 0.7162
- Data Sessao Recall: 0.9636
- Data Sessao F1: 0.8217
- Data Sessao Number: 55
- Modalidade Licitacao Precision: 0.9554
- Modalidade Licitacao Recall: 0.9667
- Modalidade Licitacao F1: 0.9610
- Modalidade Licitacao Number: 421
- Numero Exercicio Precision: 0.9323
- Numero Exercicio Recall: 0.9676
- Numero Exercicio F1: 0.9496
- Numero Exercicio Number: 185
- Objeto Licitacao Precision: 0.5270
- Objeto Licitacao Recall: 0.6610
- Objeto Licitacao F1: 0.5865
- Objeto Licitacao Number: 59
- Valor Objeto Precision: 0.8444
- Valor Objeto Recall: 0.9268
- Valor Objeto F1: 0.8837
- Valor Objeto Number: 41
- Overall Precision: 0.8723
- Overall Recall: 0.9318
- Overall F1: 0.9011
- Overall Accuracy: 0.9966
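
The per-entity metric names above suggest a token-classification (NER) model over Brazilian public-procurement text. The sketch below shows one way to load the checkpoint with the Transformers pipeline; the Hub repo id `ricardo-filho/bert_base_tcm_teste` and the example sentence are assumptions, not confirmed by this card.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "ricardo-filho/bert_base_tcm_teste"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Merge sub-word predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(ner("Pregão Presencial nº 01/2021, sessão pública em 15 de março de 2021."))
```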

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
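
The original training script is not included in this card; the sketch below simply maps the listed hyperparameters onto `transformers.TrainingArguments`. `output_dir` is a placeholder, and any argument not listed above keeps its library default.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_base_tcm_teste",  # placeholder output directory
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                    # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50.0,
)
```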

### Training results

| Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Objeto Licitacao Precision | Objeto Licitacao Recall | Objeto Licitacao F1 | Objeto Licitacao Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.0193 | 0.96 | 2750 | 0.0190 | 0.7016 | 0.8365 | 0.7632 | 104 | 0.6585 | 0.9818 | 0.7883 | 55 | 0.9446 | 0.9715 | 0.9578 | 421 | 0.9036 | 0.9622 | 0.9319 | 185 | 0.2261 | 0.4407 | 0.2989 | 59 | 0.7 | 0.8537 | 0.7692 | 41 | 0.7882 | 0.9121 | 0.8457 | 0.9946 |
| 0.0165 | 1.92 | 5500 | 0.0133 | 0.7203 | 0.8173 | 0.7658 | 104 | 0.675 | 0.9818 | 0.8 | 55 | 0.9447 | 0.9739 | 0.9591 | 421 | 0.9430 | 0.9838 | 0.9630 | 185 | 0.4691 | 0.6441 | 0.5429 | 59 | 0.8043 | 0.9024 | 0.8506 | 41 | 0.8466 | 0.9318 | 0.8872 | 0.9964 |
| 0.0089 | 2.88 | 8250 | 0.0150 | 0.7636 | 0.8077 | 0.7850 | 104 | 0.7895 | 0.8182 | 0.8036 | 55 | 0.9491 | 0.9739 | 0.9613 | 421 | 0.9282 | 0.9784 | 0.9526 | 185 | 0.4444 | 0.6102 | 0.5143 | 59 | 0.8636 | 0.9268 | 0.8941 | 41 | 0.8640 | 0.9179 | 0.8901 | 0.9965 |
| 0.0066 | 3.84 | 11000 | 0.0150 | 0.7692 | 0.8654 | 0.8145 | 104 | 0.7333 | 0.8 | 0.7652 | 55 | 0.9464 | 0.9644 | 0.9553 | 421 | 0.9278 | 0.9730 | 0.9499 | 185 | 0.5 | 0.6780 | 0.5755 | 59 | 0.7708 | 0.9024 | 0.8315 | 41 | 0.8588 | 0.9214 | 0.8890 | 0.9966 |
| 0.0055 | 4.8 | 13750 | 0.0176 | 0.75 | 0.8654 | 0.8036 | 104 | 0.7903 | 0.8909 | 0.8376 | 55 | 0.9490 | 0.9715 | 0.9601 | 421 | 0.9326 | 0.9730 | 0.9524 | 185 | 0.4568 | 0.6271 | 0.5286 | 59 | 0.7872 | 0.9024 | 0.8409 | 41 | 0.8587 | 0.9272 | 0.8916 | 0.9963 |
| 0.0066 | 5.76 | 16500 | 0.0155 | 0.7965 | 0.8654 | 0.8295 | 104 | 0.7162 | 0.9636 | 0.8217 | 55 | 0.9554 | 0.9667 | 0.9610 | 421 | 0.9323 | 0.9676 | 0.9496 | 185 | 0.5270 | 0.6610 | 0.5865 | 59 | 0.8444 | 0.9268 | 0.8837 | 41 | 0.8723 | 0.9318 | 0.9011 | 0.9966 |
| 0.0031 | 6.72 | 19250 | 0.0181 | 0.775 | 0.8942 | 0.8304 | 104 | 0.7879 | 0.9455 | 0.8595 | 55 | 0.9533 | 0.9691 | 0.9611 | 421 | 0.9326 | 0.9730 | 0.9524 | 185 | 0.4875 | 0.6610 | 0.5612 | 59 | 0.8261 | 0.9268 | 0.8736 | 41 | 0.8682 | 0.9364 | 0.9010 | 0.9965 |
| 0.0066 | 7.68 | 22000 | 0.0192 | 0.7798 | 0.8173 | 0.7981 | 104 | 0.6986 | 0.9273 | 0.7969 | 55 | 0.9353 | 0.9620 | 0.9485 | 421 | 0.8995 | 0.9676 | 0.9323 | 185 | 0.4 | 0.5763 | 0.4722 | 59 | 0.7551 | 0.9024 | 0.8222 | 41 | 0.8344 | 0.9145 | 0.8726 | 0.9961 |
| 0.0052 | 8.64 | 24750 | 0.0201 | 0.8036 | 0.8654 | 0.8333 | 104 | 0.7869 | 0.8727 | 0.8276 | 55 | 0.9465 | 0.9667 | 0.9565 | 421 | 0.9326 | 0.9730 | 0.9524 | 185 | 0.5060 | 0.7119 | 0.5915 | 59 | 0.8043 | 0.9024 | 0.8506 | 41 | 0.8692 | 0.9295 | 0.8983 | 0.9966 |
| 0.0015 | 9.61 | 27500 | 0.0202 | 0.7838 | 0.8365 | 0.8093 | 104 | 0.7313 | 0.8909 | 0.8033 | 55 | 0.9482 | 0.9572 | 0.9527 | 421 | 0.9326 | 0.9730 | 0.9524 | 185 | 0.4865 | 0.6102 | 0.5414 | 59 | 0.8043 | 0.9024 | 0.8506 | 41 | 0.8646 | 0.9156 | 0.8894 | 0.9966 |
| 0.0015 | 10.57 | 30250 | 0.0225 | 0.7798 | 0.8173 | 0.7981 | 104 | 0.6912 | 0.8545 | 0.7642 | 55 | 0.9508 | 0.9644 | 0.9575 | 421 | 0.9375 | 0.9730 | 0.9549 | 185 | 0.5395 | 0.6949 | 0.6074 | 59 | 0.8478 | 0.9512 | 0.8966 | 41 | 0.8693 | 0.9225 | 0.8951 | 0.9964 |
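
Per-entity precision, recall, F1, and support values like those in the table are the standard output of seqeval-style evaluation for token classification. The sketch below illustrates this; the IOB2 tag names are hypothetical, and the card does not state which evaluation code was actually used.

```python
from seqeval.metrics import classification_report

# Hypothetical gold and predicted label sequences in IOB2 format.
y_true = [["B-MODALIDADE_LICITACAO", "I-MODALIDADE_LICITACAO", "O", "B-NUMERO_EXERCICIO"]]
y_pred = [["B-MODALIDADE_LICITACAO", "I-MODALIDADE_LICITACAO", "O", "O"]]

# Prints precision, recall, F1, and support per entity type plus overall averages.
print(classification_report(y_true, y_pred))
```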

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1