has-abi committed
Commit 54434c0
1 Parent(s): ad5fb28

update model card README.md

Files changed (1):
  1. README.md +37 -31

README.md CHANGED
@@ -15,20 +15,26 @@ should probably proofread and complete it, then remove this comment. -->
 
 # distilBERT-finetuned-resumes-sections
 
-This model is a fine-tuned version of [Geotrend/distilbert-base-en-fr-cased](https://huggingface.co/Geotrend/distilbert-base-en-fr-cased) on a private resume sections dataset.
+This model is a fine-tuned version of [Geotrend/distilbert-base-en-fr-cased](https://huggingface.co/Geotrend/distilbert-base-en-fr-cased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0487
-- F1: 0.9512
-- Roc Auc: 0.9729
-- Accuracy: 0.9482
 
 ## Model description
 
-This model classifies a resume section into 12 classes.
-
-### Possible classes for a resume section
-
-**awards**, **certificates**, **contact/name/title**, **education**, **interests**, **languages**, **para**, **professional_experiences**, **projects**, **skills**, **soft_skills**, **summary**.
+- Loss: 0.0450
+- F1: 0.9585
+- Roc Auc: 0.9774
+- Accuracy: 0.9557
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
 
 ### Training hyperparameters
 
@@ -45,31 +51,31 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
 |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
-| 0.058 | 1.0 | 1083 | 0.0457 | 0.9186 | 0.9494 | 0.9020 |
-| 0.0277 | 2.0 | 2166 | 0.0393 | 0.9327 | 0.9614 | 0.9251 |
-| 0.0154 | 3.0 | 3249 | 0.0333 | 0.9425 | 0.9671 | 0.9367 |
-| 0.0104 | 4.0 | 4332 | 0.0408 | 0.9357 | 0.9645 | 0.9293 |
-| 0.0084 | 5.0 | 5415 | 0.0405 | 0.9376 | 0.9643 | 0.9298 |
-| 0.0065 | 6.0 | 6498 | 0.0419 | 0.9439 | 0.9699 | 0.9385 |
-| 0.0051 | 7.0 | 7581 | 0.0450 | 0.9412 | 0.9674 | 0.9376 |
-| 0.0034 | 8.0 | 8664 | 0.0406 | 0.9433 | 0.9684 | 0.9372 |
-| 0.0035 | 9.0 | 9747 | 0.0441 | 0.9403 | 0.9664 | 0.9358 |
-| 0.0024 | 10.0 | 10830 | 0.0492 | 0.9419 | 0.9678 | 0.9367 |
-| 0.0026 | 11.0 | 11913 | 0.0470 | 0.9468 | 0.9708 | 0.9436 |
-| 0.0022 | 12.0 | 12996 | 0.0514 | 0.9424 | 0.9679 | 0.9395 |
-| 0.0013 | 13.0 | 14079 | 0.0458 | 0.9478 | 0.9715 | 0.9441 |
-| 0.0019 | 14.0 | 15162 | 0.0494 | 0.9477 | 0.9711 | 0.9450 |
-| 0.0007 | 15.0 | 16245 | 0.0492 | 0.9496 | 0.9719 | 0.9464 |
-| 0.0009 | 16.0 | 17328 | 0.0487 | 0.9512 | 0.9729 | 0.9482 |
-| 0.001 | 17.0 | 18411 | 0.0510 | 0.9480 | 0.9711 | 0.9441 |
-| 0.0006 | 18.0 | 19494 | 0.0532 | 0.9477 | 0.9709 | 0.9441 |
-| 0.0007 | 19.0 | 20577 | 0.0511 | 0.9487 | 0.9720 | 0.9445 |
-| 0.0005 | 20.0 | 21660 | 0.0522 | 0.9471 | 0.9710 | 0.9436 |
+| 0.0518 | 1.0 | 1174 | 0.0368 | 0.9406 | 0.9635 | 0.9302 |
+| 0.0251 | 2.0 | 2348 | 0.0346 | 0.9375 | 0.9653 | 0.9289 |
+| 0.0136 | 3.0 | 3522 | 0.0343 | 0.9475 | 0.9707 | 0.9425 |
+| 0.0096 | 4.0 | 4696 | 0.0326 | 0.9539 | 0.9737 | 0.9468 |
+| 0.007 | 5.0 | 5870 | 0.0357 | 0.9521 | 0.9740 | 0.9480 |
+| 0.007 | 6.0 | 7044 | 0.0389 | 0.9509 | 0.9725 | 0.9472 |
+| 0.0034 | 7.0 | 8218 | 0.0403 | 0.9532 | 0.9746 | 0.9510 |
+| 0.0033 | 8.0 | 9392 | 0.0422 | 0.9493 | 0.9722 | 0.9468 |
+| 0.0024 | 9.0 | 10566 | 0.0425 | 0.9512 | 0.9733 | 0.9485 |
+| 0.0023 | 10.0 | 11740 | 0.0431 | 0.9537 | 0.9743 | 0.9502 |
+| 0.0019 | 11.0 | 12914 | 0.0457 | 0.9501 | 0.9719 | 0.9463 |
+| 0.002 | 12.0 | 14088 | 0.0428 | 0.9560 | 0.9751 | 0.9536 |
+| 0.0012 | 13.0 | 15262 | 0.0435 | 0.9569 | 0.9761 | 0.9553 |
+| 0.001 | 14.0 | 16436 | 0.0464 | 0.9565 | 0.9759 | 0.9544 |
+| 0.001 | 15.0 | 17610 | 0.0460 | 0.9574 | 0.9766 | 0.9549 |
+| 0.0007 | 16.0 | 18784 | 0.0450 | 0.9585 | 0.9774 | 0.9557 |
+| 0.0003 | 17.0 | 19958 | 0.0481 | 0.9572 | 0.9764 | 0.9553 |
+| 0.0005 | 18.0 | 21132 | 0.0478 | 0.9576 | 0.9764 | 0.9557 |
+| 0.0005 | 19.0 | 22306 | 0.0483 | 0.9574 | 0.9766 | 0.9553 |
+| 0.0005 | 20.0 | 23480 | 0.0481 | 0.9576 | 0.9766 | 0.9557 |
 
 
 ### Framework versions
 
-- Transformers 4.20.1
-- Pytorch 1.12.0+cu113
-- Datasets 2.3.2
+- Transformers 4.21.1
+- Pytorch 1.12.1+cu113
+- Datasets 2.4.0
 - Tokenizers 0.12.1
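A side note on the metrics in this card: reporting F1, Roc Auc and Accuracy together is the convention for multi-label classification, where sigmoid scores are thresholded per label and "Accuracy" means exact match of the whole label set. The card does not say how the author computed them, so the following is only a minimal sketch of that convention on made-up toy labels (the model's real section classes and evaluation data are not shown here):

```python
import numpy as np

# Toy multi-label ground truth and hypothetical sigmoid scores
# (4 examples, 3 invented labels -- purely illustrative).
y_true = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 1],
                   [0, 0, 1]])
probs = np.array([[0.92, 0.10, 0.05],
                  [0.20, 0.45, 0.30],
                  [0.70, 0.05, 0.60],
                  [0.10, 0.15, 0.88]])

# Threshold each label independently at 0.5.
y_pred = (probs >= 0.5).astype(int)

# Micro-averaged F1: pool true/false positives and negatives over all cells.
tp = int(((y_pred == 1) & (y_true == 1)).sum())
fp = int(((y_pred == 1) & (y_true == 0)).sum())
fn = int(((y_pred == 0) & (y_true == 1)).sum())
micro_f1 = 2 * tp / (2 * tp + fp + fn)

# Subset accuracy: an example counts only if every label matches.
subset_acc = float((y_pred == y_true).all(axis=1).mean())

print(f"micro F1 = {micro_f1:.4f}, subset accuracy = {subset_acc:.4f}")
```

In practice these values usually come from a library such as scikit-learn (`f1_score(..., average="micro")`, `roc_auc_score`, `accuracy_score`) inside a `compute_metrics` callback, but the hand-rolled version above makes the definitions explicit.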