---
language:
  - fr
license: mit
tags:
  - generated_from_trainer
datasets:
  - allocine
metrics:
  - accuracy
  - f1
  - precision
  - recall
widget:
  - text: Un film magnifique avec un duo d'acteurs excellent.
  - text: Grosse déception pour ce thriller qui peine à convaincre.
base_model: cmarkea/distilcamembert-base
model-index:
  - name: distilcamembert-allocine
    results:
      - task:
          type: text-classification
          name: Text Classification
        dataset:
          name: allocine
          type: allocine
          config: allocine
          split: validation
          args: allocine
        metrics:
          - type: accuracy
            value: 0.9714
            name: Accuracy
          - type: f1
            value: 0.9709909727152854
            name: F1
          - type: precision
            value: 0.9648256399919372
            name: Precision
          - type: recall
            value: 0.9772356063699469
            name: Recall
---

# distilcamembert-allocine

This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on the [allocine](https://huggingface.co/datasets/allocine) dataset. It achieves the following results on the evaluation set:

- Loss: 0.1066
- Accuracy: 0.9714
- F1: 0.9710
- Precision: 0.9648
- Recall: 0.9772
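
A minimal inference sketch using the `transformers` pipeline API; the repository id below is a placeholder (the exact hosting location of this checkpoint is not stated in the card), and the two inputs are the widget examples from the metadata:

```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual location of this checkpoint.
classifier = pipeline("text-classification", model="path/to/distilcamembert-allocine")

# The two widget examples from the metadata above.
print(classifier("Un film magnifique avec un duo d'acteurs excellent."))
# "A magnificent film with an excellent pair of lead actors."
print(classifier("Grosse déception pour ce thriller qui peine à convaincre."))
# "A big disappointment: this thriller struggles to convince."
```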

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
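
The metadata above does point to the allocine French movie-review dataset, which can be loaded as follows (a sketch assuming the public Hub dataset, which ships `train`/`validation`/`test` splits with `review` and `label` columns):

```python
from datasets import load_dataset

# The allocine dataset referenced in the metadata; the model-index reports
# results on the validation split.
dataset = load_dataset("allocine")
print(dataset["validation"][0])
```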

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
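
Expressed as Transformers `TrainingArguments`, the list above corresponds roughly to the following sketch; `output_dir` and the evaluation cadence are illustrative assumptions (the 500-step cadence is inferred from the results table below), and the Adam betas/epsilon match the library defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilcamembert-allocine",  # assumed name, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    evaluation_strategy="steps",  # assumption: matches the 500-step eval cadence
    eval_steps=500,
)
```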

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1504        | 0.2   | 500  | 0.1290          | 0.9555   | 0.9542 | 0.9614    | 0.9470 |
| 0.1334        | 0.4   | 1000 | 0.1049          | 0.9624   | 0.9619 | 0.9536    | 0.9703 |
| 0.1158        | 0.6   | 1500 | 0.1052          | 0.963    | 0.9627 | 0.9498    | 0.9760 |
| 0.1153        | 0.8   | 2000 | 0.0949          | 0.9661   | 0.9653 | 0.9686    | 0.9620 |
| 0.1053        | 1.0   | 2500 | 0.0936          | 0.9666   | 0.9663 | 0.9542    | 0.9788 |
| 0.0755        | 1.2   | 3000 | 0.0987          | 0.97     | 0.9695 | 0.9644    | 0.9748 |
| 0.0716        | 1.4   | 3500 | 0.1078          | 0.9688   | 0.9684 | 0.9598    | 0.9772 |
| 0.0688        | 1.6   | 4000 | 0.1051          | 0.9673   | 0.9670 | 0.9552    | 0.9792 |
| 0.0691        | 1.8   | 4500 | 0.0940          | 0.9709   | 0.9704 | 0.9688    | 0.9720 |
| 0.0733        | 2.0   | 5000 | 0.1038          | 0.9686   | 0.9683 | 0.9558    | 0.9812 |
| 0.0476        | 2.2   | 5500 | 0.1066          | 0.9714   | 0.9710 | 0.9648    | 0.9772 |
| 0.047         | 2.4   | 6000 | 0.1098          | 0.9689   | 0.9686 | 0.9587    | 0.9788 |
| 0.0431        | 2.6   | 6500 | 0.1110          | 0.9711   | 0.9706 | 0.9666    | 0.9747 |
| 0.0464        | 2.8   | 7000 | 0.1149          | 0.9697   | 0.9694 | 0.9592    | 0.9798 |
| 0.0342        | 3.0   | 7500 | 0.1122          | 0.9703   | 0.9699 | 0.9621    | 0.9778 |
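
The per-step metrics above could be produced by a `compute_metrics` callback along these lines; the actual implementation is not included in the card, so this is a sketch built on the `evaluate` library:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
precision = evaluate.load("precision")
recall = evaluate.load("recall")

def compute_metrics(eval_pred):
    # eval_pred is (logits, labels); take the argmax class per example.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        **accuracy.compute(predictions=predictions, references=labels),
        **f1.compute(predictions=predictions, references=labels),
        **precision.compute(predictions=predictions, references=labels),
        **recall.compute(predictions=predictions, references=labels),
    }
```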

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2