Theoreticallyhugo committed
Commit 082fc2d
1 Parent(s): 1c20d10

trainer: training complete at 2024-01-28 12:46:53.849983.

Files changed (2)
  1. README.md +12 -15
  2. model.safetensors +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
  metrics:
  - name: Accuracy
    type: accuracy
- value: 0.8069798272958544
+ value: 0.8161524956107349
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,16 +33,13 @@ should probably proofread and complete it, then remove this comment. -->
  This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the fancy_dataset dataset.
  It achieves the following results on the evaluation set:
  - Loss: 0.5164
- - B-claim: {'precision': 0.5, 'recall': 0.01444043321299639, 'f1-score': 0.028070175438596492, 'support': 277.0}
- - B-majorclaim: {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0}
- - B-premise: {'precision': 0.6012121212121212, 'recall': 0.7737909516380655, 'f1-score': 0.6766712141882674, 'support': 641.0}
- - I-claim: {'precision': 0.5778401122019635, 'recall': 0.5050257416033341, 'f1-score': 0.5389848246991105, 'support': 4079.0}
- - I-majorclaim: {'precision': 0.6294978252273626, 'recall': 0.7800097991180793, 'f1-score': 0.6967177242888402, 'support': 2041.0}
- - I-premise: {'precision': 0.8417716308553552, 'recall': 0.8926233085988651, 'f1-score': 0.8664519955935939, 'support': 11455.0}
+ - Claim: {'precision': 0.5841029946823397, 'recall': 0.47910927456382, 'f1-score': 0.5264219952074664, 'support': 4356.0}
+ - Majorclaim: {'precision': 0.663898774219059, 'recall': 0.7694775435380385, 'f1-score': 0.7127998301846742, 'support': 2182.0}
  - O: {'precision': 0.9219015280135824, 'recall': 0.8781671159029649, 'f1-score': 0.8995030369961348, 'support': 9275.0}
- - Accuracy: 0.8070
- - Macro avg: {'precision': 0.581746173930055, 'recall': 0.549151050010615, 'f1-score': 0.5294855673149347, 'support': 27909.0}
- - Weighted avg: {'precision': 0.8011330593153426, 'recall': 0.8069798272958544, 'f1-score': 0.8001053402048132, 'support': 27909.0}
+ - Premise: {'precision': 0.8377274128893001, 'recall': 0.8983961640211641, 'f1-score': 0.8670017552257859, 'support': 12096.0}
+ - Accuracy: 0.8162
+ - Macro avg: {'precision': 0.7519076774510703, 'recall': 0.7562875245064969, 'f1-score': 0.7514316544035153, 'support': 27909.0}
+ - Weighted avg: {'precision': 0.8125252509519227, 'recall': 0.8161524956107349, 'f1-score': 0.8125897502575133, 'support': 27909.0}

  ## Model description

@@ -71,11 +68,11 @@ The following hyperparameters were used during training:

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | B-claim | B-majorclaim | B-premise | I-claim | I-majorclaim | I-premise | O | Accuracy | Macro avg | Weighted avg |
- |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
- | No log | 1.0 | 41 | 0.7242 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 277.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.8028169014084507, 'recall': 0.08892355694227769, 'f1-score': 0.16011235955056177, 'support': 641.0} | {'precision': 0.43010291595197253, 'recall': 0.24589360137288552, 'f1-score': 0.31289970363437836, 'support': 4079.0} | {'precision': 0.6835106382978723, 'recall': 0.1259186673199412, 'f1-score': 0.2126603227141084, 'support': 2041.0} | {'precision': 0.7517079419299744, 'recall': 0.9221300742034046, 'f1-score': 0.8282432273493551, 'support': 11455.0} | {'precision': 0.7629536017331648, 'recall': 0.9112668463611859, 'f1-score': 0.8305409521937798, 'support': 9275.0} | 0.7285 | {'precision': 0.49015599990306213, 'recall': 0.3277332494570993, 'f1-score': 0.3349223664917405, 'support': 27909.0} | {'precision': 0.6933695141932649, 'recall': 0.7285105163209, 'f1-score': 0.6809195289383426, 'support': 27909.0} |
- | No log | 2.0 | 82 | 0.5451 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 277.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.638235294117647, 'recall': 0.6770670826833073, 'f1-score': 0.6570779712339135, 'support': 641.0} | {'precision': 0.5605615409729023, 'recall': 0.4209365040451091, 'f1-score': 0.48081769812377484, 'support': 4079.0} | {'precision': 0.6609442060085837, 'recall': 0.6036256736893679, 'f1-score': 0.6309859154929578, 'support': 2041.0} | {'precision': 0.8157935644333904, 'recall': 0.916281099956351, 'f1-score': 0.8631224045063938, 'support': 11455.0} | {'precision': 0.8817295464179737, 'recall': 0.8970350404312668, 'f1-score': 0.8893164448720005, 'support': 9275.0} | 0.7954 | {'precision': 0.5081805931357853, 'recall': 0.5021350572579146, 'f1-score': 0.5030457763184344, 'support': 27909.0} | {'precision': 0.7727823747619976, 'recall': 0.7954064996954388, 'f1-score': 0.7813164854898953, 'support': 27909.0} |
- | No log | 3.0 | 123 | 0.5164 | {'precision': 0.5, 'recall': 0.01444043321299639, 'f1-score': 0.028070175438596492, 'support': 277.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.6012121212121212, 'recall': 0.7737909516380655, 'f1-score': 0.6766712141882674, 'support': 641.0} | {'precision': 0.5778401122019635, 'recall': 0.5050257416033341, 'f1-score': 0.5389848246991105, 'support': 4079.0} | {'precision': 0.6294978252273626, 'recall': 0.7800097991180793, 'f1-score': 0.6967177242888402, 'support': 2041.0} | {'precision': 0.8417716308553552, 'recall': 0.8926233085988651, 'f1-score': 0.8664519955935939, 'support': 11455.0} | {'precision': 0.9219015280135824, 'recall': 0.8781671159029649, 'f1-score': 0.8995030369961348, 'support': 9275.0} | 0.8070 | {'precision': 0.581746173930055, 'recall': 0.549151050010615, 'f1-score': 0.5294855673149347, 'support': 27909.0} | {'precision': 0.8011330593153426, 'recall': 0.8069798272958544, 'f1-score': 0.8001053402048132, 'support': 27909.0} |
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | No log | 1.0 | 41 | 0.7242 | {'precision': 0.4451114922813036, 'recall': 0.23829201101928374, 'f1-score': 0.3104066985645933, 'support': 4356.0} | {'precision': 0.6888297872340425, 'recall': 0.11869844179651695, 'f1-score': 0.20250195465207194, 'support': 2182.0} | {'precision': 0.7629536017331648, 'recall': 0.9112668463611859, 'f1-score': 0.8305409521937798, 'support': 9275.0} | {'precision': 0.7774552148976847, 'recall': 0.9077380952380952, 'f1-score': 0.83756054769442, 'support': 12096.0} | 0.7427 | {'precision': 0.6685875240365489, 'recall': 0.5439988486037705, 'f1-score': 0.5452525382762162, 'support': 27909.0} | {'precision': 0.7138351496506337, 'recall': 0.7427353183560859, 'f1-score': 0.7032996725252499, 'support': 27909.0} |
+ | No log | 2.0 | 82 | 0.5451 | {'precision': 0.5706823375775384, 'recall': 0.40128558310376494, 'f1-score': 0.47122253673001757, 'support': 4356.0} | {'precision': 0.6872317596566524, 'recall': 0.5870760769935839, 'f1-score': 0.6332179930795848, 'support': 2182.0} | {'precision': 0.8817295464179737, 'recall': 0.8970350404312668, 'f1-score': 0.8893164448720005, 'support': 9275.0} | {'precision': 0.819134799940942, 'recall': 0.9173280423280423, 'f1-score': 0.8654551127057172, 'support': 12096.0} | 0.8042 | {'precision': 0.7396946108982767, 'recall': 0.7006811857141645, 'f1-score': 0.71480302184683, 'support': 27909.0} | {'precision': 0.7908462519320261, 'recall': 0.8042208606542692, 'f1-score': 0.7936967322502336, 'support': 27909.0} |
+ | No log | 3.0 | 123 | 0.5164 | {'precision': 0.5841029946823397, 'recall': 0.47910927456382, 'f1-score': 0.5264219952074664, 'support': 4356.0} | {'precision': 0.663898774219059, 'recall': 0.7694775435380385, 'f1-score': 0.7127998301846742, 'support': 2182.0} | {'precision': 0.9219015280135824, 'recall': 0.8781671159029649, 'f1-score': 0.8995030369961348, 'support': 9275.0} | {'precision': 0.8377274128893001, 'recall': 0.8983961640211641, 'f1-score': 0.8670017552257859, 'support': 12096.0} | 0.8162 | {'precision': 0.7519076774510703, 'recall': 0.7562875245064969, 'f1-score': 0.7514316544035153, 'support': 27909.0} | {'precision': 0.8125252509519227, 'recall': 0.8161524956107349, 'f1-score': 0.8125897502575133, 'support': 27909.0} |


  ### Framework versions
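
Note on the metric change: the updated card aggregates the BIO tags of the previous revision into one class per argument-component type. The supports add up exactly (B-claim 277 + I-claim 4079 = Claim 4356, B-majorclaim 141 + I-majorclaim 2041 = Majorclaim 2182, B-premise 641 + I-premise 11455 = Premise 12096), and the O row is unchanged. The per-class dicts have the shape of scikit-learn's `classification_report(..., output_dict=True)`. The sketch below shows one way such merged numbers could be computed from BIO tag sequences; it is an assumption about the workflow, not code from this repository, and `merge_bio`/`merged_report` are hypothetical helper names.

```python
# Hypothetical sketch: collapse BIO token tags into the merged classes reported
# in the new card, then emit per-class dicts in the same format, assuming
# scikit-learn's classification_report is the metric backend.
from sklearn.metrics import classification_report

def merge_bio(tags):
    """Strip the B-/I- prefix so 'B-claim' and 'I-claim' both become 'Claim'; 'O' stays 'O'."""
    return [t if t == "O" else t.split("-", 1)[1].capitalize() for t in tags]

def merged_report(true_tags, pred_tags):
    # output_dict=True returns {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...}
    # per class, plus 'accuracy', 'macro avg', and 'weighted avg' entries.
    return classification_report(
        merge_bio(true_tags), merge_bio(pred_tags), output_dict=True, zero_division=0
    )

# Toy usage: B-claim and I-claim tokens are scored together as 'Claim',
# mirroring how the merged supports in the diff are obtained.
print(merged_report(["B-claim", "I-claim", "O"], ["I-claim", "I-claim", "O"]))
```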
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:14c2d06d02b185a35473778fa08a2f8ca3f7cd025ea1700172a851bbd799ba49
+ oid sha256:dd0b152b04957f2b0f66ca6789a614173447837debbcebedc2eb929341de7383
  size 592330980
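
The weights file is stored via Git LFS, so the commit only rewrites the pointer: the sha256 oid changes while the size stays at 592330980 bytes. A minimal sketch, assuming a locally downloaded copy of model.safetensors, for checking the blob against the pointer's oid and size (`verify_lfs_blob` is a made-up helper, not part of git-lfs or this repo):

```python
# Hypothetical helper: recompute sha256 and size of a downloaded file and
# compare them with the values from the Git LFS pointer above.
import hashlib
import os

def verify_lfs_blob(path, expected_oid, expected_size):
    # The pointer stores "oid sha256:<hex digest>" and "size <bytes>".
    if os.path.getsize(path) != expected_size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest() == expected_oid

print(verify_lfs_blob(
    "model.safetensors",
    "dd0b152b04957f2b0f66ca6789a614173447837debbcebedc2eb929341de7383",
    592330980,
))
```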