---
license: mit
tags:
  - generated_from_trainer
metrics:
  - precision
  - recall
  - f1
  - accuracy
model-index:
  - name: pos_final_mono_fr
    results: []
---

# pos_final_mono_fr

This model is a fine-tuned version of [almanach/camembert-base](https://huggingface.co/almanach/camembert-base) on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how such metrics are typically computed follows the list):

- Loss: 0.5416
- Precision: 0.9742
- Recall: 0.9745
- F1: 0.9743
- Accuracy: 0.9768
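
The card does not say how these metrics were computed. For `generated_from_trainer` token-classification runs they typically come from `seqeval` over the non-padded labels, along the lines of this sketch (the `evaluate` library and the `label_list` tag inventory are assumptions, not documented here):

```python
import numpy as np
import evaluate

# Assumption: seqeval-based metrics as in the stock token-classification
# example scripts; label_list is the undocumented tag inventory.
seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred, label_list):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Positions labeled -100 are special/padded tokens; drop them.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```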

## Model description

More information needed

## Intended uses & limitations

More information needed
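
The intended use is not documented, but the checkpoint is a token-classification model, so a standard `transformers` pipeline should load it. In the sketch below, the repository id `pranaydeeps/lettuce_pos_fr_mono` is inferred from this repo's path (an assumption), and the tag inventory depends on the undocumented training data.

```python
from transformers import pipeline

# Assumed repo id, inferred from the repository path; adjust if needed.
tagger = pipeline(
    "token-classification",
    model="pranaydeeps/lettuce_pos_fr_mono",
    aggregation_strategy="simple",  # merge sub-word pieces into whole words
)

# The label set is whatever the (undocumented) training data used,
# so inspect the output to discover the tag inventory.
for token in tagger("Le chat dort sur le canapé."):
    print(token["word"], token["entity_group"], round(token["score"], 3))
```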

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto `transformers.TrainingArguments` follows the list):

- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024 (train_batch_size × gradient_accumulation_steps: 256 × 4)
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
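
The actual training script is not published; as a minimal sketch, the listed hyperparameters would correspond to a `TrainingArguments` configuration roughly like the one below. The `output_dir` is a placeholder, and a per-device batch size of 256 with 4 accumulation steps reproduces the listed total batch size of 1024 only on a single device.

```python
from transformers import TrainingArguments

# Sketch under assumptions: the real training script is unknown,
# and "pos_final_mono_fr" as output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="pos_final_mono_fr",
    learning_rate=5e-05,
    per_device_train_batch_size=256,  # "train_batch_size" above
    per_device_eval_batch_size=256,   # "eval_batch_size" above
    seed=42,
    gradient_accumulation_steps=4,    # 256 * 4 = total batch of 1024 on one device
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=40.0,
    fp16=True,                        # "Native AMP" mixed precision
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
# default optimizer settings, so no extra arguments are needed for it.
```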

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.95  | 14   | 3.6697          | 0.0210    | 0.0194 | 0.0201 | 0.0215   |
| No log        | 1.95  | 28   | 3.6329          | 0.0513    | 0.0484 | 0.0498 | 0.0511   |
| No log        | 2.95  | 42   | 3.5739          | 0.1142    | 0.1086 | 0.1113 | 0.1267   |
| No log        | 3.95  | 56   | 3.4791          | 0.2535    | 0.1976 | 0.2221 | 0.3061   |
| No log        | 4.95  | 70   | 3.3377          | 0.3393    | 0.2029 | 0.2539 | 0.3788   |
| No log        | 5.95  | 84   | 3.1886          | 0.3737    | 0.1401 | 0.2038 | 0.3427   |
| No log        | 6.95  | 98   | 3.0505          | 0.4342    | 0.3211 | 0.3692 | 0.4600   |
| No log        | 7.95  | 112  | 2.8996          | 0.5160    | 0.4319 | 0.4702 | 0.5282   |
| No log        | 8.95  | 126  | 2.7485          | 0.5617    | 0.4878 | 0.5222 | 0.5732   |
| No log        | 9.95  | 140  | 2.5862          | 0.6077    | 0.5374 | 0.5704 | 0.6246   |
| No log        | 10.95 | 154  | 2.4205          | 0.6805    | 0.6311 | 0.6549 | 0.6887   |
| No log        | 11.95 | 168  | 2.2603          | 0.7816    | 0.7569 | 0.7691 | 0.7839   |
| No log        | 12.95 | 182  | 2.1124          | 0.8366    | 0.8305 | 0.8335 | 0.8370   |
| No log        | 13.95 | 196  | 1.9826          | 0.8691    | 0.8681 | 0.8686 | 0.8736   |
| No log        | 14.95 | 210  | 1.8721          | 0.9210    | 0.9200 | 0.9205 | 0.9240   |
| No log        | 15.95 | 224  | 1.7779          | 0.9390    | 0.9392 | 0.9391 | 0.9417   |
| No log        | 16.95 | 238  | 1.6986          | 0.9442    | 0.9452 | 0.9447 | 0.9466   |
| No log        | 17.95 | 252  | 1.6294          | 0.9467    | 0.9476 | 0.9472 | 0.9486   |
| No log        | 18.95 | 266  | 1.5667          | 0.9481    | 0.9493 | 0.9487 | 0.9499   |
| No log        | 19.95 | 280  | 1.5073          | 0.9507    | 0.9522 | 0.9514 | 0.9523   |
| No log        | 20.95 | 294  | 1.4499          | 0.9538    | 0.9550 | 0.9544 | 0.9552   |
| No log        | 21.95 | 308  | 1.3926          | 0.9555    | 0.9563 | 0.9559 | 0.9563   |
| No log        | 22.95 | 322  | 1.3373          | 0.9609    | 0.9614 | 0.9612 | 0.9612   |
| No log        | 23.95 | 336  | 1.2815          | 0.9622    | 0.9624 | 0.9623 | 0.9623   |
| No log        | 24.95 | 350  | 1.2246          | 0.9649    | 0.9648 | 0.9648 | 0.9646   |
| No log        | 25.95 | 364  | 1.1682          | 0.9653    | 0.9652 | 0.9652 | 0.9648   |
| No log        | 26.95 | 378  | 1.1114          | 0.9650    | 0.9659 | 0.9654 | 0.9661   |
| No log        | 27.95 | 392  | 1.0521          | 0.9669    | 0.9675 | 0.9672 | 0.9699   |
| No log        | 28.95 | 406  | 0.9950          | 0.9677    | 0.9679 | 0.9678 | 0.9707   |
| No log        | 29.95 | 420  | 0.9364          | 0.9687    | 0.9690 | 0.9688 | 0.9716   |
| No log        | 30.95 | 434  | 0.8800          | 0.9691    | 0.9693 | 0.9692 | 0.9721   |
| No log        | 31.95 | 448  | 0.8233          | 0.9693    | 0.9698 | 0.9696 | 0.9726   |
| No log        | 32.95 | 462  | 0.7679          | 0.9703    | 0.9703 | 0.9703 | 0.9733   |
| No log        | 33.95 | 476  | 0.7146          | 0.9711    | 0.9711 | 0.9711 | 0.9737   |
| No log        | 34.95 | 490  | 0.6641          | 0.9722    | 0.9724 | 0.9723 | 0.9750   |
| 2.0937        | 35.95 | 504  | 0.6187          | 0.9729    | 0.9729 | 0.9729 | 0.9755   |
| 2.0937        | 36.95 | 518  | 0.5834          | 0.9727    | 0.9732 | 0.9729 | 0.9756   |
| 2.0937        | 37.95 | 532  | 0.5605          | 0.9735    | 0.9739 | 0.9737 | 0.9762   |
| 2.0937        | 38.95 | 546  | 0.5466          | 0.9737    | 0.9742 | 0.9739 | 0.9765   |
| 2.0937        | 39.95 | 560  | 0.5416          | 0.9742    | 0.9745 | 0.9743 | 0.9768   |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.12.0
- Datasets 2.18.0
- Tokenizers 0.13.2