---
language:
- fr
license: mit
tags:
- generated_from_trainer
datasets:
- allocine
widget:
- text: "Un film magnifique avec un duo d'acteurs excellent."
- text: "Grosse déception pour ce thriller qui peine à convaincre."
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: camembert-allocine
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: allocine
      type: allocine
      config: allocine
      split: validation
      args: allocine
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.97535
    - name: F1
      type: f1
      value: 0.9749045558666326
    - name: Precision
      type: precision
      value: 0.9722814498933902
    - name: Recall
      type: recall
      value: 0.9775418538178848
---

# camembert-allocine

This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the allocine dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0928
- Accuracy: 0.9754
- F1: 0.9749
- Precision: 0.9723
- Recall: 0.9775

## Model description

[camembert-base](https://huggingface.co/camembert-base) is a French language model based on the RoBERTa architecture. This checkpoint adds a sequence-classification head on top of it and fine-tunes the full model for binary sentiment analysis (positive vs. negative) of French movie reviews.

## Intended uses & limitations

The model is intended for sentiment classification of French-language movie reviews. It has not been evaluated on other domains (e.g. product reviews or social media) or on other languages, so performance there should not be assumed. A minimal inference sketch with the Transformers `pipeline` API is shown below; the checkpoint id `camembert-allocine` is a placeholder for wherever the model is hosted or saved locally, and the label names depend on the `id2label` mapping stored in the model config.
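
```python
from transformers import pipeline

# "camembert-allocine" is a placeholder: replace it with the actual
# Hub id or the local directory the fine-tuned model was saved to.
classifier = pipeline("text-classification", model="camembert-allocine")

print(classifier("Un film magnifique avec un duo d'acteurs excellent."))
# -> e.g. [{'label': 'LABEL_1', 'score': 0.99}]; the label names depend
#    on the id2label mapping saved with the model.
```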

## Training and evaluation data

The model was fine-tuned on the [allocine](https://huggingface.co/datasets/allocine) dataset, a binary sentiment corpus of French movie reviews collected from Allociné.fr (roughly 160k training examples and 20k each for validation and test). The metrics reported above were computed on the validation split, as declared in the model-index metadata. The splits can be inspected with the `datasets` library:
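
```python
from datasets import load_dataset

# Each example pairs a French "review" string with a 0/1 "label".
allocine = load_dataset("allocine")

print(allocine)                    # train / validation / test splits
print(allocine["validation"][0])   # one labeled review
```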

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
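
A sketch of how these settings map onto `transformers.TrainingArguments`; the original training script is not part of this card, so treat this as an illustration rather than the exact code used.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="camembert-allocine",   # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,     # effective train batch size: 16 * 4 = 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the Transformers defaults.
)
```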

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |   F1   | Precision | Recall |
| :-----------: | :---: | :---: | :-------------: | :------: | :----: | :-------: | :----: |
|    0.1276     |  0.2  |  500  |     0.1187      |  0.9623  | 0.9622 |  0.9462   | 0.9787 |
|    0.1013     |  0.4  | 1000  |     0.0917      |  0.9683  | 0.9675 |  0.9725   | 0.9625 |
|    0.1254     |  0.6  | 1500  |     0.0889      |  0.9701  | 0.9698 |  0.9597   | 0.9801 |
|    0.1004     |  0.8  | 2000  |     0.0792      |  0.9716  | 0.9709 |  0.9727   | 0.9691 |
|    0.1149     |  1.0  | 2500  |     0.0762      |  0.9727  | 0.9723 |  0.9673   | 0.9773 |
|    0.0574     |  1.2  | 3000  |     0.0849      |  0.9733  | 0.9729 |  0.9679   | 0.9780 |
|    0.0394     |  1.4  | 3500  |     0.1026      |  0.9718  | 0.9715 |  0.9595   | 0.9839 |
|    0.0401     |  1.6  | 4000  |     0.1065      |  0.9698  | 0.9697 |  0.9528   | 0.9872 |
|    0.0458     |  1.8  | 4500  |     0.0834      |  0.9744  | 0.9739 |  0.9715   | 0.9764 |
|    0.0554     |  2.0  | 5000  |     0.0873      |  0.9719  | 0.9717 |  0.9594   | 0.9844 |
|    0.0516     |  2.2  | 5500  |     0.0928      |  0.9754  | 0.9749 |  0.9723   | 0.9775 |
|    0.0355     |  2.4  | 6000  |     0.1017      |  0.9744  | 0.9741 |  0.9642   | 0.9842 |
|    0.0227     |  2.6  | 6500  |     0.0983      |  0.9748  | 0.9743 |  0.9729   | 0.9757 |
|    0.0359     |  2.8  | 7000  |     0.0990      |  0.9747  | 0.9743 |  0.9665   | 0.9823 |
|    0.0384     |  3.0  | 7500  |     0.1001      |  0.9746  | 0.9742 |  0.9662   | 0.9824 |
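
The checkpoint metrics above can be recomputed from model predictions with the Hugging Face `evaluate` library (a tooling assumption; it is not listed among the framework versions below):

```python
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

# Toy values; in practice predictions come from Trainer.predict()
# and references from the validation split labels.
preds, refs = [1, 0, 1, 1], [1, 0, 0, 1]

print(accuracy.compute(predictions=preds, references=refs))  # {'accuracy': 0.75}
print(f1.compute(predictions=preds, references=refs))        # {'f1': 0.8}
```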


### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2