svetaku committed
Commit 4848e96
1 Parent(s): c91d264

Training complete
README.md ADDED
@@ -0,0 +1,71 @@
+ ---
+ license: apache-2.0
+ base_model: google/mt5-small
+ tags:
+ - summarization
+ - generated_from_trainer
+ metrics:
+ - rouge
+ model-index:
+ - name: mt5-small-finetuned-news-summary-kaggle
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # mt5-small-finetuned-news-summary-kaggle
+
+ This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on a Kaggle news-summary dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.6907
+ - Rouge1: 26.6547
+ - Rouge2: 10.1
+ - Rougel: 24.0137
+ - Rougelsum: 23.9999
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5.6e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 8
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum |
+ |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
+ | No log        | 1.0   | 220  | 3.9956          | 14.9021 | 3.3744 | 13.4763 | 13.499    |
+ | 8.3183        | 2.0   | 440  | 3.1550          | 17.9472 | 5.9671 | 16.6974 | 16.6959   |
+ | 8.3183        | 3.0   | 660  | 2.8950          | 21.2665 | 7.4266 | 19.5041 | 19.4837   |
+ | 4.0457        | 4.0   | 880  | 2.8087          | 25.063  | 9.4484 | 22.746  | 22.7351   |
+ | 4.0457        | 5.0   | 1100 | 2.7375          | 25.5269 | 9.4299 | 23.0623 | 23.0075   |
+ | 3.6505        | 6.0   | 1320 | 2.7091          | 25.8308 | 9.3392 | 23.2001 | 23.1586   |
+ | 3.6505        | 7.0   | 1540 | 2.6949          | 26.2177 | 9.8536 | 23.5946 | 23.6358   |
+ | 3.5175        | 8.0   | 1760 | 2.6907          | 26.6547 | 10.1   | 24.0137 | 23.9999   |
+
+
+ ### Framework versions
+
+ - Transformers 4.39.3
+ - Pytorch 2.1.2
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
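The hyperparameters above map onto a standard `Seq2SeqTrainer` fine-tuning run. The sketch below is illustrative rather than the script behind this commit: only the base model, the listed hyperparameters and the ROUGE metric come from the card, while the dataset file, the `text`/`headlines` column names and the sequence lengths are assumptions.

```python
# Illustrative sketch, NOT the original training script: the dataset file and
# column names ("news_summary.csv", "text", "headlines") and sequence lengths
# are assumptions; the base model and hyperparameters come from the card.
import evaluate
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
rouge = evaluate.load("rouge")

# Hypothetical CSV with an article column ("text") and a summary column ("headlines").
raw = load_dataset("csv", data_files="news_summary.csv")["train"]
splits = raw.train_test_split(test_size=0.1, seed=42)

def preprocess(batch):
    model_inputs = tokenizer(batch["text"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["headlines"], max_length=30, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = splits.map(preprocess, batched=True, remove_columns=splits["train"].column_names)

def compute_metrics(eval_pred):
    # ROUGE on the generated summaries, reported as percentages like the card.
    preds, labels = eval_pred
    if isinstance(preds, tuple):
        preds = preds[0]
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    scores = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
    return {k: round(v * 100, 4) for k, v in scores.items()}

# learning_rate, batch sizes, seed, scheduler and epoch count mirror the card;
# the default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08.
args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-finetuned-news-summary-kaggle",
    learning_rate=5.6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    evaluation_strategy="epoch",
    predict_with_generate=True,
    generation_max_length=30,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    compute_metrics=compute_metrics,
)
trainer.train()
```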
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.39.3"
+ }
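This generation config only pins the mT5 special-token ids (pad and decoder-start are token 0, end-of-sequence is token 1) and the Transformers version; `generate()` applies it automatically once the checkpoint is loaded. A minimal inference sketch, assuming the Hub id `svetaku/mt5-small-finetuned-news-summary-kaggle` (inferred from the committer and model name) and illustrative length/beam settings:

```python
# Minimal inference sketch for this checkpoint. The Hub id below is inferred
# from the committer and model name and may not be the actual repository path;
# max_new_tokens and num_beams are choices made for this example.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "svetaku/mt5-small-finetuned-news-summary-kaggle"  # assumed Hub path
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

article = "Replace this string with the news article to summarize."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

# generate() reads decoder_start_token_id, eos_token_id and pad_token_id from
# the generation_config.json shipped with the checkpoint.
summary_ids = model.generate(**inputs, max_new_tokens=30, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```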
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c65f01a0d81d8221f9918059c1185326364baaed202202eae9a9a7eb6c20235c
+ oid sha256:f25170e25dd96c90f18d5c5c357d1c356083317cb4c4332b59cdaa5e5d9a5812
  size 1200729512
runs/Apr18_18-38-21_4f1bd1ce9b7b/events.out.tfevents.1713465597.4f1bd1ce9b7b.34.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1fac9054ba2053c3daf73efb3ed17dba19a9298e00941fd945e812ccdb0552a1
- size 8436
+ oid sha256:290fcdce9215a75d3156f640ce6ad3f7538238e1f6441c4be01624b5eff5f802
+ size 9949
runs/Apr18_18-38-21_4f1bd1ce9b7b/events.out.tfevents.1713467427.4f1bd1ce9b7b.34.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:80a0d122cce936c4a9a1289a7e54e54c09cd422571ffd277ddaeeedb020e2bf6
+ size 562