
BERTimbau-base_LeNER-Br

This model is a fine-tuned version of neuralmind/bert-base-portuguese-cased for named-entity recognition (NER) on the lener_br dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: nan
  • Precision: 0.8318
  • Recall: 0.8839
  • F1: 0.8571
  • Accuracy: 0.9754
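
A minimal usage sketch with the transformers pipeline is shown below. The repository id CassioBN/BERTimbau-base_LeNER-Br is taken from this card; the example sentence and the aggregation strategy are illustrative choices, not part of the original card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for token classification (NER).
# Adjust the model id if you work from a local copy.
ner = pipeline(
    "token-classification",
    model="CassioBN/BERTimbau-base_LeNER-Br",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

text = (
    "Acordam os Desembargadores do Tribunal de Justiça do Estado de São Paulo, "
    "nos termos do art. 5º da Constituição Federal."
)
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```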

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
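
Below is a hedged sketch of how these settings map onto transformers TrainingArguments. Dataset loading, tokenization, and label alignment are omitted; the output directory name, the number of labels, and the per-epoch evaluation strategy are assumptions made for illustration, not read from the original training script.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "neuralmind/bert-base-portuguese-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# lener_br uses 13 BIO tags (O plus B-/I- for six entity types).
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=13)

# Hyperparameters from the list above; the Adam betas and epsilon match the
# transformers defaults (adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8).
training_args = TrainingArguments(
    output_dir="bertimbau-base-lener-br",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results table
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=..., eval_dataset=..., tokenizer=tokenizer)
# trainer.train()
```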

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2037        | 1.0   | 979  | nan             | 0.7910    | 0.8762 | 0.8314 | 0.9721   |
| 0.0308        | 2.0   | 1958 | nan             | 0.7747    | 0.8663 | 0.8180 | 0.9698   |
| 0.02          | 3.0   | 2937 | nan             | 0.8316    | 0.8911 | 0.8603 | 0.9801   |
| 0.0133        | 4.0   | 3916 | nan             | 0.8038    | 0.8812 | 0.8407 | 0.9728   |
| 0.0111        | 5.0   | 4895 | nan             | 0.8253    | 0.8707 | 0.8474 | 0.9753   |
| 0.0078        | 6.0   | 5874 | nan             | 0.8235    | 0.8779 | 0.8498 | 0.9711   |
| 0.0057        | 7.0   | 6853 | nan             | 0.8174    | 0.8768 | 0.8461 | 0.9760   |
| 0.0032        | 8.0   | 7832 | nan             | 0.8113    | 0.8845 | 0.8463 | 0.9769   |
| 0.0027        | 9.0   | 8811 | nan             | 0.8344    | 0.8867 | 0.8597 | 0.9767   |
| 0.0023        | 10.0  | 9790 | nan             | 0.8318    | 0.8839 | 0.8571 | 0.9754   |

Testing results

The model achieves the following results on the test set (an evaluation sketch follows the list):

  • Loss: 0.0710
  • Precision: 0.8786
  • Recall: 0.9138
  • F1: 0.8958
  • Accuracy: 0.9884
  • Runtime: 12.44 s (111.741 samples/s, 13.988 steps/s)
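
Precision, recall, F1, and accuracy figures of this kind are typically computed with seqeval over BIO-tagged predictions. The sketch below shows one way to do that with a compute_metrics function; the label list and its ordering are assumptions for illustration, not values read from this checkpoint.

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

# Assumed label list; the authoritative order comes from the lener_br dataset features.
label_list = ["O", "B-ORGANIZACAO", "I-ORGANIZACAO", "B-PESSOA", "I-PESSOA",
              "B-TEMPO", "I-TEMPO", "B-LOCAL", "I-LOCAL",
              "B-LEGISLACAO", "I-LEGISLACAO", "B-JURISPRUDENCIA", "I-JURISPRUDENCIA"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop special/sub-word positions, which are labelled -100 during alignment.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```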

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1