---
license: mit
tags:
  - generated_from_trainer
model-index:
  - name: bert_base_tcm_teste
    results: []
---

# bert_base_tcm_teste

This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0192
- Criterio Julgamento Precision: 0.7209
- Criterio Julgamento Recall: 0.8942
- Criterio Julgamento F1: 0.7983
- Criterio Julgamento Number: 104
- Data Sessao Precision: 0.6351
- Data Sessao Recall: 0.8545
- Data Sessao F1: 0.7287
- Data Sessao Number: 55
- Modalidade Licitacao Precision: 0.9224
- Modalidade Licitacao Recall: 0.9596
- Modalidade Licitacao F1: 0.9406
- Modalidade Licitacao Number: 421
- Numero Exercicio Precision: 0.8872
- Numero Exercicio Recall: 0.9351
- Numero Exercicio F1: 0.9105
- Numero Exercicio Number: 185
- Objeto Licitacao Precision: 0.2348
- Objeto Licitacao Recall: 0.4576
- Objeto Licitacao F1: 0.3103
- Objeto Licitacao Number: 59
- Valor Objeto Precision: 0.5424
- Valor Objeto Recall: 0.7805
- Valor Objeto F1: 0.64
- Valor Objeto Number: 41
- Overall Precision: 0.7683
- Overall Recall: 0.8971
- Overall F1: 0.8277
- Overall Accuracy: 0.9948
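
The entity-level metrics above indicate a token-classification (NER) head over Brazilian public-procurement fields. A minimal usage sketch follows; it assumes the checkpoint is published on the Hub as `ricardo-filho/bert_base_tcm_teste` (the repository this card belongs to) and that the saved config carries the entity labels:

```python
# Minimal usage sketch. Assumptions: the checkpoint is available on the Hub as
# "ricardo-filho/bert_base_tcm_teste" and was saved with a token-classification
# head whose labels match the entities reported above.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ricardo-filho/bert_base_tcm_teste",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

texto = (
    "Pregão Presencial nº 12/2021, critério de julgamento: menor preço, "
    "sessão pública em 15 de março de 2021."
)
for entidade in ner(texto):
    print(entidade["entity_group"], "->", entidade["word"], round(entidade["score"], 3))
```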

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
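
Per the `generated_from_trainer` tag, training used the Hugging Face `Trainer`. A rough `TrainingArguments` equivalent of the list above is sketched below; this is not the original training script, and `output_dir` plus any option not listed here are placeholders or library defaults:

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above.
# Assumption: the standard Hugging Face Trainer was used (generated_from_trainer
# tag); output_dir is a placeholder and unspecified options keep their defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_base_tcm_teste",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=50.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```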

### Training results

| Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Objeto Licitacao Precision | Objeto Licitacao Recall | Objeto Licitacao F1 | Objeto Licitacao Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.0346 | 0.96 | 2750 | 0.0329 | 0.6154 | 0.8462 | 0.7126 | 104 | 0.5495 | 0.9091 | 0.6849 | 55 | 0.8482 | 0.9287 | 0.8866 | 421 | 0.7438 | 0.9730 | 0.8431 | 185 | 0.0525 | 0.3220 | 0.0903 | 59 | 0.4762 | 0.7317 | 0.5769 | 41 | 0.5565 | 0.8763 | 0.6807 | 0.9880 |
| 0.0309 | 1.92 | 5500 | 0.0322 | 0.6694 | 0.7788 | 0.72 | 104 | 0.5976 | 0.8909 | 0.7153 | 55 | 0.9178 | 0.9549 | 0.9360 | 421 | 0.8211 | 0.8432 | 0.8320 | 185 | 0.15 | 0.2034 | 0.1727 | 59 | 0.2203 | 0.3171 | 0.26 | 41 | 0.7351 | 0.8243 | 0.7771 | 0.9934 |
| 0.0179 | 2.88 | 8250 | 0.0192 | 0.7209 | 0.8942 | 0.7983 | 104 | 0.6351 | 0.8545 | 0.7287 | 55 | 0.9224 | 0.9596 | 0.9406 | 421 | 0.8872 | 0.9351 | 0.9105 | 185 | 0.2348 | 0.4576 | 0.3103 | 59 | 0.5424 | 0.7805 | 0.64 | 41 | 0.7683 | 0.8971 | 0.8277 | 0.9948 |
| 0.0174 | 3.84 | 11000 | 0.0320 | 0.7522 | 0.8173 | 0.7834 | 104 | 0.5741 | 0.5636 | 0.5688 | 55 | 0.8881 | 0.9430 | 0.9147 | 421 | 0.8490 | 0.8811 | 0.8647 | 185 | 0.2436 | 0.3220 | 0.2774 | 59 | 0.5370 | 0.7073 | 0.6105 | 41 | 0.7719 | 0.8370 | 0.8031 | 0.9946 |
| 0.0192 | 4.8 | 13750 | 0.0261 | 0.6744 | 0.8365 | 0.7468 | 104 | 0.6190 | 0.7091 | 0.6610 | 55 | 0.9169 | 0.9430 | 0.9297 | 421 | 0.8404 | 0.8541 | 0.8472 | 185 | 0.2059 | 0.3559 | 0.2609 | 59 | 0.5088 | 0.7073 | 0.5918 | 41 | 0.7521 | 0.8451 | 0.7959 | 0.9949 |
| 0.0158 | 5.76 | 16500 | 0.0250 | 0.6641 | 0.8173 | 0.7328 | 104 | 0.5610 | 0.8364 | 0.6715 | 55 | 0.9199 | 0.9549 | 0.9371 | 421 | 0.9167 | 0.9514 | 0.9337 | 185 | 0.1912 | 0.4407 | 0.2667 | 59 | 0.4828 | 0.6829 | 0.5657 | 41 | 0.7386 | 0.8821 | 0.8040 | 0.9948 |
| 0.0126 | 6.72 | 19250 | 0.0267 | 0.6694 | 0.7981 | 0.7281 | 104 | 0.6386 | 0.9636 | 0.7681 | 55 | 0.8723 | 0.9572 | 0.9128 | 421 | 0.8812 | 0.9622 | 0.9199 | 185 | 0.2180 | 0.4915 | 0.3021 | 59 | 0.5323 | 0.8049 | 0.6408 | 41 | 0.7308 | 0.9006 | 0.8068 | 0.9945 |
| 0.0162 | 7.68 | 22000 | 0.0328 | 0.675 | 0.7788 | 0.7232 | 104 | 0.6604 | 0.6364 | 0.6481 | 55 | 0.9263 | 0.9549 | 0.9404 | 421 | 0.8535 | 0.9135 | 0.8825 | 185 | 0.2471 | 0.3559 | 0.2917 | 59 | 0.5091 | 0.6829 | 0.5833 | 41 | 0.7788 | 0.8509 | 0.8133 | 0.9948 |
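
The per-entity Precision/Recall/F1/Number columns follow the usual entity-level (seqeval-style) scoring over BIO tags. The sketch below illustrates how such numbers are typically produced; the tags and label names are invented for the example, since the actual label set is not documented in this card:

```python
# Illustrative only: entity-level metrics of the kind reported above, computed
# with seqeval over BIO-tagged sequences. The tags and label names here are
# made up for the example.
from seqeval.metrics import classification_report

y_true = [["O", "B-modalidade_licitacao", "I-modalidade_licitacao", "O", "B-numero_exercicio"]]
y_pred = [["O", "B-modalidade_licitacao", "I-modalidade_licitacao", "O", "O"]]

print(classification_report(y_true, y_pred, digits=4))
```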

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1