Logeswaransr committed
Commit f892ab0
1 Parent(s): c8002d3

End of training

Files changed (5)
  1. README.md +17 -27
  2. config.json +1 -1
  3. generation_config.json +1 -1
  4. pytorch_model.bin +1 -1
  5. training_args.bin +1 -1
README.md CHANGED
@@ -17,11 +17,11 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.7199
- - Rouge1: 0.5828
- - Rouge2: 0.2796
- - Rougel: 0.5828
- - Rougelsum: 0.5828
+ - Loss: 1.4868
+ - Rouge1: 0.1544
+ - Rouge2: 0.0388
+ - Rougel: 0.1494
+ - Rougelsum: 0.1493
 
  ## Model description
 
@@ -46,37 +46,27 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 20
+ - num_epochs: 10
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
  |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
- | No log | 1.0 | 2 | 12.8104 | 0.0 | 0.0 | 0.0 | 0.0 |
- | No log | 2.0 | 4 | 7.4162 | 0.0 | 0.0 | 0.0 | 0.0 |
- | No log | 3.0 | 6 | 4.6275 | 0.0 | 0.0 | 0.0 | 0.0 |
- | No log | 4.0 | 8 | 4.2136 | 0.0 | 0.0 | 0.0 | 0.0 |
- | No log | 5.0 | 10 | 3.7987 | 0.0 | 0.0 | 0.0 | 0.0 |
- | No log | 6.0 | 12 | 3.4665 | 0.0513 | 0.0 | 0.0256 | 0.0513 |
- | No log | 7.0 | 14 | 3.1709 | 0.1846 | 0.0 | 0.1590 | 0.1846 |
- | No log | 8.0 | 16 | 2.8300 | 0.1846 | 0.0 | 0.1590 | 0.1846 |
- | No log | 9.0 | 18 | 2.5395 | 0.3513 | 0.0 | 0.3256 | 0.3513 |
- | No log | 10.0 | 20 | 2.3414 | 0.3256 | 0.0 | 0.3256 | 0.3256 |
- | No log | 11.0 | 22 | 2.1369 | 0.3256 | 0.0 | 0.3256 | 0.3256 |
- | No log | 12.0 | 24 | 1.9783 | 0.3256 | 0.0 | 0.3256 | 0.3256 |
- | No log | 13.0 | 26 | 1.7889 | 0.3256 | 0.0 | 0.3256 | 0.3256 |
- | No log | 14.0 | 28 | 1.5654 | 0.3513 | 0.0 | 0.3256 | 0.3513 |
- | No log | 15.0 | 30 | 1.3210 | 0.3317 | 0.0 | 0.3317 | 0.3317 |
- | No log | 16.0 | 32 | 1.0739 | 0.5828 | 0.2796 | 0.5828 | 0.5828 |
- | No log | 17.0 | 34 | 0.8915 | 0.5828 | 0.2796 | 0.5828 | 0.5828 |
- | No log | 18.0 | 36 | 0.7844 | 0.5828 | 0.2796 | 0.5828 | 0.5828 |
- | No log | 19.0 | 38 | 0.7356 | 0.5828 | 0.2796 | 0.5828 | 0.5828 |
- | No log | 20.0 | 40 | 0.7199 | 0.5828 | 0.2796 | 0.5828 | 0.5828 |
+ | No log | 1.0 | 375 | 0.8740 | 0.1454 | 0.0294 | 0.1416 | 0.1420 |
+ | 1.2229 | 2.0 | 750 | 0.8700 | 0.1572 | 0.0445 | 0.1526 | 0.1533 |
+ | 0.6201 | 3.0 | 1125 | 0.9088 | 0.1639 | 0.0461 | 0.1616 | 0.1606 |
+ | 0.4623 | 4.0 | 1500 | 0.9650 | 0.1581 | 0.0457 | 0.1533 | 0.1537 |
+ | 0.4623 | 5.0 | 1875 | 1.0441 | 0.1487 | 0.0332 | 0.1436 | 0.1437 |
+ | 0.3399 | 6.0 | 2250 | 1.1880 | 0.1581 | 0.0436 | 0.1528 | 0.1533 |
+ | 0.2692 | 7.0 | 2625 | 1.2633 | 0.1582 | 0.0423 | 0.1539 | 0.1547 |
+ | 0.2233 | 8.0 | 3000 | 1.3449 | 0.1624 | 0.0409 | 0.1590 | 0.1593 |
+ | 0.2233 | 9.0 | 3375 | 1.4225 | 0.1555 | 0.0401 | 0.1513 | 0.1507 |
+ | 0.183 | 10.0 | 3750 | 1.4868 | 0.1544 | 0.0388 | 0.1494 | 0.1493 |
 
 
  ### Framework versions
 
- - Transformers 4.33.2
+ - Transformers 4.33.3
  - Pytorch 2.0.1+cu118
  - Datasets 2.14.5
  - Tokenizers 0.13.3
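
For context (not part of this commit), the Rouge1/Rouge2/Rougel/Rougelsum columns above are ROUGE F-measure scores on a 0-1 scale. Below is a minimal sketch of how such scores are typically computed with the `evaluate` library; the example strings are invented, and the exact preprocessing used for this model is not shown in the diff.

```python
# Hedged sketch: typical ROUGE computation with the `evaluate` library
# (requires the `rouge_score` backend package). The strings are invented
# placeholders, not data from this model's evaluation set.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["the cat sat on the mat"]        # hypothetical model outputs
references = ["the cat is sitting on the mat"]  # hypothetical reference texts

# Returns aggregated scores in [0, 1], the same scale as the table above.
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```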
config.json CHANGED
@@ -56,7 +56,7 @@
  },
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
- "transformers_version": "4.33.2",
+ "transformers_version": "4.33.3",
  "use_cache": true,
  "vocab_size": 32128
  }
generation_config.json CHANGED
@@ -2,5 +2,5 @@
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
- "transformers_version": "4.33.2"
+ "transformers_version": "4.33.3"
  }
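
The three token ids above are the generation defaults the model falls back to when `generate()` is called without explicit arguments. A minimal sketch (not part of this commit) of recreating this file with the `transformers` `GenerationConfig` API; the output directory name is a placeholder.

```python
# Hedged sketch: rebuild the generation_config.json shown above.
# The save directory is a placeholder, not a path from this repository.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    decoder_start_token_id=0,  # values copied from the new file above
    eos_token_id=1,
    pad_token_id=0,
)
gen_config.save_pretrained("flan-t5-finetuned-checkpoint")  # writes generation_config.json
```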
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a7010807f770de827cbb817107d7bda9a0e6cfe0e2e8e0fa6bbd49ef2ff89a91
+ oid sha256:0a518abee707727d3519a288651cd918fe2aa6b2bb06d09cafb82ed66492c4c1
  size 990404917
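
The LFS pointer above is the updated float32 checkpoint (about 990 MB). A minimal sketch (not part of this commit) of loading it for inference; the repo id and prompt are placeholders, since the fine-tuned repository's name is not visible in this diff.

```python
# Hedged sketch: load the fine-tuned checkpoint and run one generation.
# The repo id and prompt are placeholders, not values from this commit.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "Logeswaransr/your-finetuned-flan-t5"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)  # fetches pytorch_model.bin (~990 MB, float32)

inputs = tokenizer("Summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```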
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cf67d53c25083a10bda06dbc61d7c345dfa85f306f7108b5697a6dc6ca0d46cd
+ oid sha256:5c9d72430a4cbbbc26ed283b63795725c53ca45d4841c3ac920f1c97ac0f1376
  size 4155
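
training_args.bin is the Trainer's pickled TrainingArguments object (saved with torch.save), so the updated hyperparameters can be checked directly. A minimal sketch, assuming the file has been downloaded locally and compatible transformers/torch versions (the card lists 4.33.3 and 2.0.1+cu118) are installed:

```python
# Hedged sketch: inspect the updated training_args.bin locally.
# Unpickling requires a transformers install compatible with 4.33.x.
import torch

args = torch.load("training_args.bin")  # path assumes a local download of the file above
print(type(args).__name__)              # the concrete TrainingArguments subclass used
print(args.num_train_epochs)            # expected to reflect the new value, 10
print(args.lr_scheduler_type)           # linear, per the README
print(args.seed)                        # 42, per the README
```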