mousaazari committed on
Commit e6c1792 · 1 Parent(s): 508ba3b

update model card README.md

Browse files
Files changed (1) hide show
  1. README.md +31 -31
README.md CHANGED
@@ -15,9 +15,9 @@ should probably proofread and complete it, then remove this comment. -->
  This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
  It achieves the following results on the evaluation set:
  - Loss: 0.1611
- - Rouge2 Precision: 0.8651
- - Rouge2 Recall: 0.2594
- - Rouge2 Fmeasure: 0.3671
+ - Rouge2 Precision: 0.8631
+ - Rouge2 Recall: 0.2595
+ - Rouge2 Fmeasure: 0.3674

  ## Model description

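The Rouge2 precision, recall, and fmeasure values updated in this hunk are the three components of a ROUGE-2 score (bigram overlap between a generated text and its reference). The card does not state which scorer produced them; the sketch below uses the `rouge_score` package as an assumption, purely to illustrate what each field measures:

```python
# Minimal sketch of ROUGE-2 precision/recall/F-measure for one prediction/reference
# pair, using the rouge_score package. Illustrative only; the card does not state
# how the reported numbers were computed, and both strings are hypothetical.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
reference = "select the name of every employee in the sales department"   # hypothetical
prediction = "select the name of each employee in the sales department"   # hypothetical

score = scorer.score(reference, prediction)["rouge2"]
print(score.precision)  # fraction of predicted bigrams that appear in the reference
print(score.recall)     # fraction of reference bigrams covered by the prediction
print(score.fmeasure)   # harmonic mean of precision and recall
```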
@@ -51,38 +51,38 @@ The following hyperparameters were used during training:
  | No log | 1.0 | 11 | 1.8867 | 0.0 | 0.0 | 0.0 |
  | No log | 2.0 | 22 | 0.9658 | 0.0119 | 0.0015 | 0.0027 |
  | No log | 3.0 | 33 | 0.6477 | 0.0468 | 0.0078 | 0.0135 |
- | No log | 4.0 | 44 | 0.4617 | 0.4211 | 0.1387 | 0.1938 |
- | No log | 5.0 | 55 | 0.3669 | 0.6388 | 0.2069 | 0.2927 |
- | No log | 6.0 | 66 | 0.3084 | 0.7073 | 0.2407 | 0.3357 |
- | No log | 7.0 | 77 | 0.2788 | 0.727 | 0.2232 | 0.3153 |
- | No log | 8.0 | 88 | 0.2549 | 0.7594 | 0.2343 | 0.3305 |
- | No log | 9.0 | 99 | 0.2368 | 0.7733 | 0.2363 | 0.3337 |
- | No log | 10.0 | 110 | 0.2322 | 0.7889 | 0.2393 | 0.3381 |
- | No log | 11.0 | 121 | 0.2151 | 0.806 | 0.2423 | 0.3436 |
- | No log | 12.0 | 132 | 0.2067 | 0.7995 | 0.2365 | 0.3359 |
- | No log | 13.0 | 143 | 0.2003 | 0.7955 | 0.235 | 0.3342 |
- | No log | 14.0 | 154 | 0.1899 | 0.823 | 0.2422 | 0.3461 |
- | No log | 15.0 | 165 | 0.1869 | 0.833 | 0.2438 | 0.3494 |
- | No log | 16.0 | 176 | 0.1826 | 0.833 | 0.2438 | 0.3494 |
- | No log | 17.0 | 187 | 0.1797 | 0.8247 | 0.2421 | 0.3468 |
- | No log | 18.0 | 198 | 0.1749 | 0.8333 | 0.2449 | 0.3509 |
- | No log | 19.0 | 209 | 0.1726 | 0.8373 | 0.2478 | 0.3536 |
- | No log | 20.0 | 220 | 0.1716 | 0.8373 | 0.2451 | 0.3518 |
- | No log | 21.0 | 231 | 0.1695 | 0.8472 | 0.2467 | 0.3542 |
- | No log | 22.0 | 242 | 0.1693 | 0.8452 | 0.249 | 0.357 |
- | No log | 23.0 | 253 | 0.1685 | 0.875 | 0.2685 | 0.3784 |
- | No log | 24.0 | 264 | 0.1668 | 0.8552 | 0.2587 | 0.3644 |
- | No log | 25.0 | 275 | 0.1641 | 0.8571 | 0.2492 | 0.357 |
- | No log | 26.0 | 286 | 0.1628 | 0.869 | 0.2602 | 0.3687 |
- | No log | 27.0 | 297 | 0.1617 | 0.8651 | 0.2594 | 0.3671 |
- | No log | 28.0 | 308 | 0.1611 | 0.8651 | 0.2594 | 0.3671 |
- | No log | 29.0 | 319 | 0.1611 | 0.8651 | 0.2594 | 0.3671 |
- | No log | 30.0 | 330 | 0.1611 | 0.8651 | 0.2594 | 0.3671 |
+ | No log | 4.0 | 44 | 0.4617 | 0.4251 | 0.14 | 0.1943 |
+ | No log | 5.0 | 55 | 0.3669 | 0.6403 | 0.2091 | 0.2937 |
+ | No log | 6.0 | 66 | 0.3084 | 0.7085 | 0.2446 | 0.3393 |
+ | No log | 7.0 | 77 | 0.2788 | 0.7282 | 0.2246 | 0.3175 |
+ | No log | 8.0 | 88 | 0.2549 | 0.7593 | 0.2346 | 0.332 |
+ | No log | 9.0 | 99 | 0.2368 | 0.7738 | 0.2367 | 0.3348 |
+ | No log | 10.0 | 110 | 0.2322 | 0.7889 | 0.2388 | 0.3393 |
+ | No log | 11.0 | 121 | 0.2151 | 0.8056 | 0.2419 | 0.3452 |
+ | No log | 12.0 | 132 | 0.2067 | 0.7996 | 0.2371 | 0.3382 |
+ | No log | 13.0 | 143 | 0.2003 | 0.7943 | 0.2365 | 0.3364 |
+ | No log | 14.0 | 154 | 0.1899 | 0.8204 | 0.244 | 0.3477 |
+ | No log | 15.0 | 165 | 0.1869 | 0.8309 | 0.2454 | 0.3502 |
+ | No log | 16.0 | 176 | 0.1826 | 0.8309 | 0.2454 | 0.3502 |
+ | No log | 17.0 | 187 | 0.1797 | 0.8252 | 0.245 | 0.3488 |
+ | No log | 18.0 | 198 | 0.1749 | 0.8353 | 0.2479 | 0.3535 |
+ | No log | 19.0 | 209 | 0.1726 | 0.8393 | 0.2508 | 0.3566 |
+ | No log | 20.0 | 220 | 0.1716 | 0.8373 | 0.2475 | 0.3538 |
+ | No log | 21.0 | 231 | 0.1695 | 0.8472 | 0.2489 | 0.3553 |
+ | No log | 22.0 | 242 | 0.1693 | 0.8472 | 0.2519 | 0.3589 |
+ | No log | 23.0 | 253 | 0.1685 | 0.877 | 0.271 | 0.3808 |
+ | No log | 24.0 | 264 | 0.1668 | 0.8552 | 0.2598 | 0.3666 |
+ | No log | 25.0 | 275 | 0.1641 | 0.8552 | 0.252 | 0.3591 |
+ | No log | 26.0 | 286 | 0.1628 | 0.8671 | 0.2598 | 0.3683 |
+ | No log | 27.0 | 297 | 0.1617 | 0.8631 | 0.2595 | 0.3674 |
+ | No log | 28.0 | 308 | 0.1611 | 0.8631 | 0.2595 | 0.3674 |
+ | No log | 29.0 | 319 | 0.1611 | 0.8631 | 0.2595 | 0.3674 |
+ | No log | 30.0 | 330 | 0.1611 | 0.8631 | 0.2595 | 0.3674 |


  ### Framework versions

- - Transformers 4.21.2
+ - Transformers 4.21.3
  - Pytorch 1.12.1+cu113
  - Datasets 2.4.0
  - Tokenizers 0.12.1
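Per-epoch Rouge2 columns like those in the table above are typically produced by a `compute_metrics` callback passed to a `Seq2SeqTrainer` running with `predict_with_generate=True`. The commit does not include the training script, so the following is a hedged sketch under that assumption; the tokenizer and metric names are placeholders, not confirmed details of this repository:

```python
# Hedged sketch (not the author's script) of a compute_metrics callback that
# reports ROUGE-2 precision/recall/F-measure at each evaluation, as in the table.
import numpy as np
from rouge_score import rouge_scorer
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")  # assumed base tokenizer
scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # -100 marks ignored label positions; replace it before decoding
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    scores = [scorer.score(ref, pred)["rouge2"]
              for ref, pred in zip(decoded_labels, decoded_preds)]
    return {
        "rouge2_precision": round(float(np.mean([s.precision for s in scores])), 4),
        "rouge2_recall": round(float(np.mean([s.recall for s in scores])), 4),
        "rouge2_fmeasure": round(float(np.mean([s.fmeasure for s in scores])), 4),
    }
```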
 
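For completeness, loading a checkpoint described by this card for inference follows the standard `transformers` pattern under the pinned versions above. The repository id below is a placeholder, since the commit page does not show the model's repo name:

```python
# Sketch of loading the fine-tuned checkpoint for inference with the framework
# versions pinned in the card (Transformers 4.21.x, PyTorch 1.12.1+cu113).
# "mousaazari/<model-name>" is a placeholder, not a confirmed repo id.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "mousaazari/<model-name>"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("example input text", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```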