javicorvi committed
Commit 8f61abf (1 parent: ebffdbd)

javicorvi/pretoxtm-ner
README.md CHANGED
@@ -14,15 +14,15 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1892
- - Study Test: {'precision': 0.7680288461538461, 'recall': 0.8887343532684284, 'f1': 0.8239845261121858, 'number': 719}
- - Manifestation: {'precision': 0.8245125348189415, 'recall': 0.8996960486322189, 'f1': 0.8604651162790697, 'number': 329}
- - Finding: {'precision': 0.7699175824175825, 'recall': 0.7844646606018194, 'f1': 0.7771230502599653, 'number': 1429}
- - Specimen: {'precision': 0.8063660477453581, 'recall': 0.8386206896551724, 'f1': 0.8221771467207573, 'number': 725}
- - Dose: {'precision': 0.8887043189368771, 'recall': 0.9385964912280702, 'f1': 0.9129692832764505, 'number': 570}
- - Dose Qualification: {'precision': 0.7419354838709677, 'recall': 0.8070175438596491, 'f1': 0.7731092436974789, 'number': 57}
- - Sex: {'precision': 0.9405940594059405, 'recall': 0.9405940594059405, 'f1': 0.9405940594059405, 'number': 202}
- - Group: {'precision': 0.647887323943662, 'recall': 0.8214285714285714, 'f1': 0.7244094488188976, 'number': 112}
+ - Loss: 0.1823
+ - Study Test: {'precision': 0.7779111644657863, 'recall': 0.9012517385257302, 'f1': 0.8350515463917525, 'number': 719}
+ - Manifestation: {'precision': 0.8337950138504155, 'recall': 0.9148936170212766, 'f1': 0.872463768115942, 'number': 329}
+ - Finding: {'precision': 0.7869535045107564, 'recall': 0.7935619314205739, 'f1': 0.7902439024390244, 'number': 1429}
+ - Specimen: {'precision': 0.7981530343007915, 'recall': 0.8344827586206897, 'f1': 0.8159136884693189, 'number': 725}
+ - Dose: {'precision': 0.8948247078464107, 'recall': 0.9403508771929825, 'f1': 0.9170230966638152, 'number': 570}
+ - Dose Qualification: {'precision': 0.696969696969697, 'recall': 0.8070175438596491, 'f1': 0.7479674796747967, 'number': 57}
+ - Sex: {'precision': 0.945, 'recall': 0.9356435643564357, 'f1': 0.9402985074626865, 'number': 202}
+ - Group: {'precision': 0.6666666666666666, 'recall': 0.8571428571428571, 'f1': 0.75, 'number': 112}
 
  ## Model description
 
@@ -51,11 +51,11 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Study Test | Manifestation | Finding | Specimen | Dose | Dose Qualification | Sex | Group |
- |:-------------:|:-----:|:----:|:---------------:|:----------:|:-------------:|:-------:|:--------:|:----:|:------------------:|:---:|:-----:|
- | No log | 1.0 | 257 | 0.2128 | {'precision': 0.7071759259259259, 'recall': 0.8497913769123783, 'f1': 0.7719519898926089, 'number': 719} | {'precision': 0.7883597883597884, 'recall': 0.9057750759878419, 'f1': 0.842998585572843, 'number': 329} | {'precision': 0.7293080054274084, 'recall': 0.7522743177046886, 'f1': 0.74061315880124, 'number': 1429} | {'precision': 0.7443609022556391, 'recall': 0.8193103448275862, 'f1': 0.7800393959290873, 'number': 725} | {'precision': 0.7741433021806854, 'recall': 0.8719298245614036, 'f1': 0.8201320132013201, 'number': 570} | {'precision': 0.7272727272727273, 'recall': 0.5614035087719298, 'f1': 0.6336633663366337, 'number': 57} | {'precision': 0.9215686274509803, 'recall': 0.9306930693069307, 'f1': 0.9261083743842364, 'number': 202} | {'precision': 0.5673758865248227, 'recall': 0.7142857142857143, 'f1': 0.6324110671936759, 'number': 112} |
- | 0.2683 | 2.0 | 514 | 0.1918 | {'precision': 0.7720144752714113, 'recall': 0.8901251738525731, 'f1': 0.82687338501292, 'number': 719} | {'precision': 0.803763440860215, 'recall': 0.9088145896656535, 'f1': 0.8530670470756062, 'number': 329} | {'precision': 0.7732474964234621, 'recall': 0.7564730580825753, 'f1': 0.7647683056243367, 'number': 1429} | {'precision': 0.7907894736842105, 'recall': 0.8289655172413793, 'f1': 0.8094276094276094, 'number': 725} | {'precision': 0.8778877887788779, 'recall': 0.9333333333333333, 'f1': 0.9047619047619047, 'number': 570} | {'precision': 0.746031746031746, 'recall': 0.8245614035087719, 'f1': 0.7833333333333334, 'number': 57} | {'precision': 0.945, 'recall': 0.9356435643564357, 'f1': 0.9402985074626865, 'number': 202} | {'precision': 0.6298701298701299, 'recall': 0.8660714285714286, 'f1': 0.7293233082706767, 'number': 112} |
- | 0.2683 | 3.0 | 771 | 0.1892 | {'precision': 0.7680288461538461, 'recall': 0.8887343532684284, 'f1': 0.8239845261121858, 'number': 719} | {'precision': 0.8245125348189415, 'recall': 0.8996960486322189, 'f1': 0.8604651162790697, 'number': 329} | {'precision': 0.7699175824175825, 'recall': 0.7844646606018194, 'f1': 0.7771230502599653, 'number': 1429} | {'precision': 0.8063660477453581, 'recall': 0.8386206896551724, 'f1': 0.8221771467207573, 'number': 725} | {'precision': 0.8887043189368771, 'recall': 0.9385964912280702, 'f1': 0.9129692832764505, 'number': 570} | {'precision': 0.7419354838709677, 'recall': 0.8070175438596491, 'f1': 0.7731092436974789, 'number': 57} | {'precision': 0.9405940594059405, 'recall': 0.9405940594059405, 'f1': 0.9405940594059405, 'number': 202} | {'precision': 0.647887323943662, 'recall': 0.8214285714285714, 'f1': 0.7244094488188976, 'number': 112} |
+ | Training Loss | Epoch | Step | Validation Loss | Study Test | Manifestation | Finding | Specimen | Dose | Dose Qualification | Sex | Group |
+ |:-------------:|:-----:|:----:|:---------------:|:----------:|:-------------:|:-------:|:--------:|:----:|:------------------:|:---:|:-----:|
+ | No log | 1.0 | 257 | 0.2018 | {'precision': 0.7382075471698113, 'recall': 0.8706536856745479, 'f1': 0.7989789406509252, 'number': 719} | {'precision': 0.8271954674220963, 'recall': 0.8875379939209727, 'f1': 0.8563049853372434, 'number': 329} | {'precision': 0.7308461025982678, 'recall': 0.7676696990902729, 'f1': 0.7488054607508532, 'number': 1429} | {'precision': 0.7324290998766955, 'recall': 0.8193103448275862, 'f1': 0.7734375, 'number': 725} | {'precision': 0.8511705685618729, 'recall': 0.8929824561403509, 'f1': 0.8715753424657534, 'number': 570} | {'precision': 0.5555555555555556, 'recall': 0.43859649122807015, 'f1': 0.4901960784313725, 'number': 57} | {'precision': 0.9285714285714286, 'recall': 0.900990099009901, 'f1': 0.914572864321608, 'number': 202} | {'precision': 0.5214723926380368, 'recall': 0.7589285714285714, 'f1': 0.6181818181818183, 'number': 112} |
+ | 0.2702 | 2.0 | 514 | 0.1916 | {'precision': 0.7688622754491018, 'recall': 0.8929068150208623, 'f1': 0.8262548262548264, 'number': 719} | {'precision': 0.7885117493472585, 'recall': 0.9179331306990881, 'f1': 0.848314606741573, 'number': 329} | {'precision': 0.7945707997065297, 'recall': 0.7578726382085375, 'f1': 0.7757879656160458, 'number': 1429} | {'precision': 0.7962466487935657, 'recall': 0.8193103448275862, 'f1': 0.8076138681169274, 'number': 725} | {'precision': 0.867430441898527, 'recall': 0.9298245614035088, 'f1': 0.8975444538526672, 'number': 570} | {'precision': 0.71875, 'recall': 0.8070175438596491, 'f1': 0.7603305785123967, 'number': 57} | {'precision': 0.9633507853403142, 'recall': 0.9108910891089109, 'f1': 0.9363867684478371, 'number': 202} | {'precision': 0.6217948717948718, 'recall': 0.8660714285714286, 'f1': 0.7238805970149254, 'number': 112} |
+ | 0.2702 | 3.0 | 771 | 0.1823 | {'precision': 0.7779111644657863, 'recall': 0.9012517385257302, 'f1': 0.8350515463917525, 'number': 719} | {'precision': 0.8337950138504155, 'recall': 0.9148936170212766, 'f1': 0.872463768115942, 'number': 329} | {'precision': 0.7869535045107564, 'recall': 0.7935619314205739, 'f1': 0.7902439024390244, 'number': 1429} | {'precision': 0.7981530343007915, 'recall': 0.8344827586206897, 'f1': 0.8159136884693189, 'number': 725} | {'precision': 0.8948247078464107, 'recall': 0.9403508771929825, 'f1': 0.9170230966638152, 'number': 570} | {'precision': 0.696969696969697, 'recall': 0.8070175438596491, 'f1': 0.7479674796747967, 'number': 57} | {'precision': 0.945, 'recall': 0.9356435643564357, 'f1': 0.9402985074626865, 'number': 202} | {'precision': 0.6666666666666666, 'recall': 0.8571428571428571, 'f1': 0.75, 'number': 112} |
 
 
  ### Framework versions
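The per-entity dictionaries in the README are entity-level scores of the kind produced by seqeval: precision, recall, and F1 over exact entity-span matches, with 'number' being the gold support for that label. As a sketch of how such numbers arise from BIO tags (pure Python, illustrative only; the actual evaluation was presumably done with the seqeval library inside the Trainer's metric callback):

```python
def extract_entities(tags):
    """Collect (label, start, end) spans from a BIO tag sequence."""
    entities, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the final span
        if tag == "O" or tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != label):
            if label is not None:
                entities.append((label, start, i))
            start, label = ((i, tag[2:]) if tag.startswith(("B-", "I-")) else (None, None))
    return entities

def entity_scores(gold_tags, pred_tags):
    """Per-label precision/recall/F1 over exact span matches, seqeval-style."""
    gold = set(extract_entities(gold_tags))
    pred = set(extract_entities(pred_tags))
    scores = {}
    for lab in {e[0] for e in gold | pred}:
        g = {e for e in gold if e[0] == lab}
        p = {e for e in pred if e[0] == lab}
        tp = len(g & p)
        prec = tp / len(p) if p else 0.0
        rec = tp / len(g) if g else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[lab] = {"precision": prec, "recall": rec, "f1": f1, "number": len(g)}
    return scores
```

As a sanity check on the table itself, each reported F1 is the harmonic mean of its precision and recall; for the final-epoch Dose row, 2·0.8948…·0.9404…/(0.8948… + 0.9404…) reproduces the reported 0.9170… .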
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:66fddf6e6b9566751e924cea133c1eb69bfbeb4b592fdd68c4ce7a12c45c6922
+ oid sha256:8c2a4b6f530a6f0bc2aa319246c7a437b4e1cbfcd2aa0065c910ca38347aeba2
  size 430954348
runs/Dec14_12-56-42_a2e3dc378379/events.out.tfevents.1702558602.a2e3dc378379.314.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0e057baa2926ea1d5c240ca594fab6001f6964024a97d344bf984444305dcf9e
+ size 6351
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:64aad2cb95f982a10ed814caf6eaa6ed663766f0862118816a8410cab546e991
+ oid sha256:9dcf50d238ab57bc197c55488c52792927aa76f64c524479a0e045ce65920c76
  size 4536
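Each binary file in this commit is stored as a Git LFS pointer: a three-line text stub recording the pointer spec version, a sha256 object id of the real payload, and its size in bytes, while the payload itself lives in LFS storage. A minimal sketch of parsing such a stub (pure Python; the field layout follows the v1 pointer format shown above):

```python
def parse_lfs_pointer(text):
    """Parse a git-lfs v1 pointer stub into its version, oid, and size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "hash_algo": algo,            # "sha256" for v1 pointers
        "oid": digest,                # hex digest of the actual file contents
        "size": int(fields["size"]),  # payload size in bytes
    }

# The new training_args.bin pointer from this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:9dcf50d238ab57bc197c55488c52792927aa76f64c524479a0e045ce65920c76
size 4536"""
info = parse_lfs_pointer(pointer)
```

To verify a downloaded payload against its pointer, one would compare `hashlib.sha256(open(path, "rb").read()).hexdigest()` with `info["oid"]` and the file size with `info["size"]`.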