
camembert_ccnet_classification_tools_classifier-only_fr_lr1e-3_V3

This model is a fine-tuned version of camembert/camembert-base-ccnet on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1260
  • Accuracy: 0.9524
  • Learning Rate: 0.0

Model description

More information needed

Intended uses & limitations

More information needed
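
No further intended-use details are given, but the base model and the model name point to a French sequence-classification task. A minimal inference sketch, assuming the standard Transformers text-classification pipeline and using the model name as a hypothetical Hub repository id:

```python
from transformers import pipeline

# Hypothetical repository id taken from the model name; replace with the actual Hub path.
model_id = "camembert_ccnet_classification_tools_classifier-only_fr_lr1e-3_V3"

# Load the fine-tuned CamemBERT classifier behind the text-classification pipeline.
classifier = pipeline("text-classification", model=model_id, tokenizer=model_id)

# Example French input; the label set is not documented in this card.
print(classifier("Cette perceuse permet de fixer des étagères au mur."))
```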

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 24
  • eval_batch_size: 192
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 60
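
A rough reconstruction of this configuration with the Transformers Trainer is sketched below. The frozen-encoder step is inferred from the "classifier-only" model name, and the number of labels and the datasets are placeholders; none of this is stated explicitly in the card:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("camembert/camembert-base-ccnet")
model = AutoModelForSequenceClassification.from_pretrained(
    "camembert/camembert-base-ccnet",
    num_labels=6,  # placeholder guess; the actual label set is not documented
)

# "classifier-only" in the model name suggests the CamemBERT encoder was frozen
# and only the classification head was trained (assumption, not stated in the card).
for param in model.base_model.parameters():
    param.requires_grad = False

training_args = TrainingArguments(
    output_dir="camembert_ccnet_classification_tools_classifier-only_fr_lr1e-3_V3",
    learning_rate=1e-3,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=192,
    seed=42,
    num_train_epochs=60,
    lr_scheduler_type="linear",
    # Adam defaults (betas=(0.9, 0.999), epsilon=1e-08) match the optimizer listed above.
)

# train_dataset / eval_dataset are placeholders; the training data is not described.
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```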

Training results

Training Loss | Epoch | Step | Validation Loss | Accuracy | Learning Rate
1.7785 1.0 14 1.5198 0.4643 0.0010
1.3586 2.0 28 1.0475 0.7381 0.0010
1.0682 3.0 42 0.7517 0.7976 0.0009
0.7986 4.0 56 0.7405 0.7262 0.0009
0.6711 5.0 70 0.6039 0.7976 0.0009
0.6062 6.0 84 0.4750 0.8333 0.0009
0.5263 7.0 98 0.3627 0.8929 0.0009
0.4188 8.0 112 0.3923 0.8452 0.0009
0.4206 9.0 126 0.3147 0.9048 0.0008
0.5178 10.0 140 0.3345 0.8571 0.0008
0.3435 11.0 154 0.3869 0.8095 0.0008
0.3486 12.0 168 0.2324 0.9405 0.0008
0.3507 13.0 182 0.2324 0.9286 0.0008
0.379 14.0 196 0.2336 0.9048 0.0008
0.3516 15.0 210 0.3526 0.8571 0.0008
0.3349 16.0 224 0.2204 0.9286 0.0007
0.2979 17.0 238 0.2769 0.9167 0.0007
0.2981 18.0 252 0.2374 0.9048 0.0007
0.2902 19.0 266 0.2410 0.9405 0.0007
0.3779 20.0 280 0.2106 0.9167 0.0007
0.2486 21.0 294 0.2172 0.9405 0.0007
0.2773 22.0 308 0.1927 0.9286 0.0006
0.2685 23.0 322 0.1876 0.9524 0.0006
0.2416 24.0 336 0.1924 0.9286 0.0006
0.2369 25.0 350 0.1686 0.9405 0.0006
0.2334 26.0 364 0.2043 0.9048 0.0006
0.223 27.0 378 0.1836 0.9405 0.0006
0.3389 28.0 392 0.2298 0.9167 0.0005
0.2863 29.0 406 0.2005 0.9167 0.0005
0.2573 30.0 420 0.1696 0.9405 0.0005
0.2192 31.0 434 0.1853 0.9286 0.0005
0.2388 32.0 448 0.1546 0.9286 0.0005
0.2461 33.0 462 0.1649 0.9286 0.0005
0.303 34.0 476 0.1588 0.9405 0.0004
0.2262 35.0 490 0.1524 0.9405 0.0004
0.3037 36.0 504 0.1469 0.9405 0.0004
0.2268 37.0 518 0.1387 0.9524 0.0004
0.2315 38.0 532 0.1896 0.9405 0.0004
0.2247 39.0 546 0.1572 0.9524 0.0003
0.1841 40.0 560 0.1512 0.9524 0.0003
0.2357 41.0 574 0.1501 0.9405 0.0003
0.2186 42.0 588 0.1642 0.9286 0.0003
0.2437 43.0 602 0.1438 0.9405 0.0003
0.2399 44.0 616 0.1835 0.9405 0.0003
0.2589 45.0 630 0.1565 0.9524 0.0003
0.2306 46.0 644 0.1868 0.9286 0.0002
0.2159 47.0 658 0.1369 0.9524 0.0002
0.212 48.0 672 0.1238 0.9524 0.0002
0.1755 49.0 686 0.1439 0.9524 0.0002
0.2242 50.0 700 0.1324 0.9524 0.0002
0.2211 51.0 714 0.1277 0.9524 0.0001
0.1589 52.0 728 0.1268 0.9405 0.0001
0.2339 53.0 742 0.1248 0.9524 0.0001
0.1963 54.0 756 0.1332 0.9524 0.0001
0.2195 55.0 770 0.1350 0.9524 0.0001
0.1619 56.0 784 0.1246 0.9524 0.0001
0.2054 57.0 798 0.1282 0.9524 5e-05
0.206 58.0 812 0.1243 0.9524 0.0000
0.188 59.0 826 0.1260 0.9524 0.0000
0.1891 60.0 840 0.1260 0.9524 0.0
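
The rightmost column is the learning rate logged at the end of each epoch. With 14 optimization steps per epoch over 60 epochs (840 steps total), a linear schedule decaying from 1e-3 to 0 reproduces the logged values; a minimal sketch of that decay, assuming no warmup steps:

```python
# Linear learning-rate decay matching the "Learning Rate" column above
# (assumes the schedule starts at 1e-3 with zero warmup steps).
base_lr = 1e-3
total_steps = 60 * 14  # 60 epochs x 14 optimization steps per epoch = 840

def linear_lr(step: int) -> float:
    """Learning rate after `step` optimization steps under a linear schedule."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Values at the end of selected epochs, e.g. epoch 30 -> 0.0005, epoch 57 -> 5e-05.
for epoch in (1, 30, 57, 60):
    print(f"epoch {epoch:2d}: lr = {linear_lr(epoch * 14):.5f}")
```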

Framework versions

  • Transformers 4.34.0
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.5
  • Tokenizers 0.14.1