gsarti committed on
Commit 525e3b8
1 Parent(s): 33f2655

Initial commit

README.md CHANGED
@@ -1,3 +1,103 @@
  ---
- license: apache-2.0
+ license: mit
+ tags:
+ - generated_from_trainer
+ datasets:
+ - it5/datasets
+ metrics:
+ - rouge
+ model-index:
+ - name: it5-efficient-small-el32-fst-i2f-0.0003
+   results:
+   - task:
+       name: Summarization
+       type: summarization
+     dataset:
+       name: it5/datasets fst
+       type: it5/datasets
+       args: fst
+     metrics:
+     - name: Rouge1
+       type: rouge
+       value: 56.585
  ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # it5-efficient-small-el32-fst-i2f-0.0003
+
+ This model is a fine-tuned version of [stefan-it/it5-efficient-small-el32](https://huggingface.co/stefan-it/it5-efficient-small-el32) on the it5/datasets fst dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.2160
+ - Rouge1: 56.585
+ - Rouge2: 36.9335
+ - Rougel: 53.7782
+ - Rougelsum: 53.7779
+ - Gen Len: 13.0891
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0003
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10.0
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
+ |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
+ | 2.9377 | 0.35 | 5000 | 2.5157 | 54.6148 | 35.1518 | 51.8908 | 51.8957 | 12.8717 |
+ | 2.803 | 0.7 | 10000 | 2.4086 | 55.641 | 36.1214 | 52.8683 | 52.8572 | 12.7513 |
+ | 2.5483 | 1.05 | 15000 | 2.3420 | 55.6604 | 36.0085 | 52.9599 | 52.9433 | 12.7754 |
+ | 2.4978 | 1.39 | 20000 | 2.3145 | 56.204 | 36.5896 | 53.338 | 53.3351 | 12.8804 |
+ | 2.5383 | 1.74 | 25000 | 2.2697 | 56.1356 | 36.6963 | 53.3579 | 53.3664 | 12.795 |
+ | 2.3368 | 2.09 | 30000 | 2.2603 | 56.0271 | 36.4249 | 53.3113 | 53.3272 | 12.7478 |
+ | 2.371 | 2.44 | 35000 | 2.2328 | 56.5041 | 36.8718 | 53.8064 | 53.7995 | 12.8243 |
+ | 2.3567 | 2.79 | 40000 | 2.2079 | 56.5318 | 36.9437 | 53.8359 | 53.8254 | 12.6851 |
+ | 2.1753 | 3.14 | 45000 | 2.2168 | 56.3831 | 36.8896 | 53.6542 | 53.6708 | 12.67 |
+ | 2.2069 | 3.48 | 50000 | 2.2055 | 56.7171 | 37.1665 | 53.9299 | 53.9259 | 12.8014 |
+ | 2.2396 | 3.83 | 55000 | 2.1801 | 56.936 | 37.5465 | 54.1064 | 54.1125 | 12.7989 |
+ | 2.0657 | 4.18 | 60000 | 2.1915 | 56.6312 | 37.1618 | 53.8646 | 53.8791 | 12.6987 |
+ | 2.0806 | 4.53 | 65000 | 2.1809 | 56.6599 | 37.1282 | 53.8838 | 53.8781 | 12.715 |
+ | 2.0933 | 4.88 | 70000 | 2.1771 | 56.5891 | 36.9461 | 53.8058 | 53.8087 | 12.6593 |
+ | 1.9949 | 5.23 | 75000 | 2.1932 | 56.4956 | 36.9679 | 53.7634 | 53.7731 | 12.6723 |
+ | 1.9954 | 5.57 | 80000 | 2.1813 | 56.4827 | 36.8319 | 53.6397 | 53.6399 | 12.6599 |
+ | 1.9912 | 5.92 | 85000 | 2.1755 | 56.6723 | 37.0432 | 53.8339 | 53.8233 | 12.7534 |
+ | 1.9068 | 6.27 | 90000 | 2.1849 | 56.6574 | 37.0691 | 53.9029 | 53.892 | 12.7037 |
+ | 1.9173 | 6.62 | 95000 | 2.1787 | 56.5701 | 36.861 | 53.6855 | 53.6699 | 12.6467 |
+ | 1.9131 | 6.97 | 100000 | 2.1862 | 56.7175 | 37.0749 | 53.8761 | 53.8794 | 12.7072 |
+ | 1.8164 | 7.32 | 105000 | 2.1999 | 56.6104 | 37.0809 | 53.8098 | 53.8216 | 12.6364 |
+ | 1.8489 | 7.66 | 110000 | 2.1945 | 56.6645 | 37.1267 | 53.9009 | 53.9008 | 12.5741 |
+ | 1.82 | 8.01 | 115000 | 2.2075 | 56.6075 | 37.0359 | 53.8792 | 53.8833 | 12.6428 |
+ | 1.772 | 8.36 | 120000 | 2.2067 | 56.4716 | 36.8675 | 53.6826 | 53.6742 | 12.6591 |
+ | 1.7795 | 8.71 | 125000 | 2.2056 | 56.4112 | 36.9011 | 53.6554 | 53.6495 | 12.608 |
+ | 1.72 | 9.06 | 130000 | 2.2197 | 56.4735 | 36.9255 | 53.6592 | 53.6463 | 12.6758 |
+ | 1.7174 | 9.41 | 135000 | 2.2169 | 56.4209 | 36.8139 | 53.5778 | 53.5685 | 12.6568 |
+ | 1.7466 | 9.75 | 140000 | 2.2165 | 56.3715 | 36.767 | 53.555 | 53.5468 | 12.6416 |
+
+
+ ### Framework versions
+
+ - Transformers 4.15.0
+ - Pytorch 1.10.0+cu102
+ - Datasets 1.17.0
+ - Tokenizers 0.10.3
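A minimal usage sketch for the checkpoint added in this commit. The repository ID below is hypothetical (adjust it to the path where the model is actually published), and the Italian input is illustrative only; the `fst`/`i2f` naming suggests an informal-to-formal formality style transfer direction.

```python
from transformers import pipeline

# Hypothetical Hub repository ID -- replace with the actual path of this model.
model_id = "gsarti/it5-efficient-small-el32-fst-i2f-0.0003"

# The checkpoint is a T5-style seq2seq model, so the generic text2text pipeline applies.
i2f = pipeline("text2text-generation", model=model_id)

# Illustrative informal Italian input to be rewritten in a formal register.
print(i2f("ciao, nn so se riesco a venire stasera")[0]["generated_text"])
```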
all_results.json ADDED
@@ -0,0 +1,18 @@
+ {
+     "epoch": 10.0,
+     "eval_gen_len": 13.0891,
+     "eval_loss": 2.216015100479126,
+     "eval_rouge1": 56.585,
+     "eval_rouge2": 36.9335,
+     "eval_rougeL": 53.7782,
+     "eval_rougeLsum": 53.7779,
+     "eval_runtime": 122.5114,
+     "eval_samples": 4849,
+     "eval_samples_per_second": 39.58,
+     "eval_steps_per_second": 4.955,
+     "train_loss": 2.137477010615576,
+     "train_runtime": 21978.9943,
+     "train_samples": 114830,
+     "train_samples_per_second": 52.245,
+     "train_steps_per_second": 6.531
+ }
config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "_name_or_path": ".",
+   "architectures": [
+     "T5ForConditionalGeneration"
+   ],
+   "d_ff": 2048,
+   "d_kv": 64,
+   "d_model": 512,
+   "decoder_start_token_id": 0,
+   "dropout_rate": 0.1,
+   "eos_token_id": 1,
+   "feed_forward_proj": "relu",
+   "initializer_factor": 1.0,
+   "is_encoder_decoder": true,
+   "layer_norm_epsilon": 1e-06,
+   "model_type": "t5",
+   "n_positions": 512,
+   "num_decoder_layers": 6,
+   "num_heads": 8,
+   "num_layers": 32,
+   "output_past": true,
+   "pad_token_id": 0,
+   "relative_attention_max_distance": 128,
+   "relative_attention_num_buckets": 32,
+   "torch_dtype": "float32",
+   "transformers_version": "4.15.0",
+   "use_cache": true,
+   "vocab_size": 32100
+ }
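The configuration above matches the deep-narrow "efficient small EL32" T5 variant of the base checkpoint: a 32-layer encoder paired with a 6-layer decoder at `d_model` 512. A minimal sketch, assuming the Transformers version listed in the model card (4.15.0), that rebuilds the same architecture from these values to inspect its size:

```python
from transformers import T5Config, T5ForConditionalGeneration

# Architecture values copied from the config.json above (deep encoder, shallow decoder).
config = T5Config(
    vocab_size=32100,
    d_model=512,
    d_kv=64,
    d_ff=2048,
    num_layers=32,          # encoder layers
    num_decoder_layers=6,   # decoder layers
    num_heads=8,
    feed_forward_proj="relu",
    dropout_rate=0.1,
)

# Randomly initialized model, used here only to check the parameter count.
model = T5ForConditionalGeneration(config)
print(f"~{model.num_parameters() / 1e6:.0f}M parameters")
```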
eval_results.json ADDED
@@ -0,0 +1,13 @@
+ {
+     "epoch": 10.0,
+     "eval_gen_len": 13.0891,
+     "eval_loss": 2.216015100479126,
+     "eval_rouge1": 56.585,
+     "eval_rouge2": 36.9335,
+     "eval_rougeL": 53.7782,
+     "eval_rougeLsum": 53.7779,
+     "eval_runtime": 122.5114,
+     "eval_samples": 4849,
+     "eval_samples_per_second": 39.58,
+     "eval_steps_per_second": 4.955
+ }
events.out.tfevents.1650965835.pg-gpu29.1324.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9cdf3a7869346f8fc551251524b70f9d4ad6d6140b88646e8c1ed2c3e6f1c64
+ size 5142
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28498691275cf13116fde642f4ca19cbd3922087cec9cebc490ba20b938ededd
+ size 569246164
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf7dfe92ee95be47bb037cff6176bf7e4b5858e3ff15d60eea10ab38cabaeaf5
+ size 569387035
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "additional_special_tokens": ["<extra_id_0>", "<extra_id_1>", "<extra_id_2>", "<extra_id_3>", "<extra_id_4>", "<extra_id_5>", "<extra_id_6>", "<extra_id_7>", "<extra_id_8>", "<extra_id_9>", "<extra_id_10>", "<extra_id_11>", "<extra_id_12>", "<extra_id_13>", "<extra_id_14>", "<extra_id_15>", "<extra_id_16>", "<extra_id_17>", "<extra_id_18>", "<extra_id_19>", "<extra_id_20>", "<extra_id_21>", "<extra_id_22>", "<extra_id_23>", "<extra_id_24>", "<extra_id_25>", "<extra_id_26>", "<extra_id_27>", "<extra_id_28>", "<extra_id_29>", "<extra_id_30>", "<extra_id_31>", "<extra_id_32>", "<extra_id_33>", "<extra_id_34>", "<extra_id_35>", "<extra_id_36>", "<extra_id_37>", "<extra_id_38>", "<extra_id_39>", "<extra_id_40>", "<extra_id_41>", "<extra_id_42>", "<extra_id_43>", "<extra_id_44>", "<extra_id_45>", "<extra_id_46>", "<extra_id_47>", "<extra_id_48>", "<extra_id_49>", "<extra_id_50>", "<extra_id_51>", "<extra_id_52>", "<extra_id_53>", "<extra_id_54>", "<extra_id_55>", "<extra_id_56>", "<extra_id_57>", "<extra_id_58>", "<extra_id_59>", "<extra_id_60>", "<extra_id_61>", "<extra_id_62>", "<extra_id_63>", "<extra_id_64>", "<extra_id_65>", "<extra_id_66>", "<extra_id_67>", "<extra_id_68>", "<extra_id_69>", "<extra_id_70>", "<extra_id_71>", "<extra_id_72>", "<extra_id_73>", "<extra_id_74>", "<extra_id_75>", "<extra_id_76>", "<extra_id_77>", "<extra_id_78>", "<extra_id_79>", "<extra_id_80>", "<extra_id_81>", "<extra_id_82>", "<extra_id_83>", "<extra_id_84>", "<extra_id_85>", "<extra_id_86>", "<extra_id_87>", "<extra_id_88>", "<extra_id_89>", "<extra_id_90>", "<extra_id_91>", "<extra_id_92>", "<extra_id_93>", "<extra_id_94>", "<extra_id_95>", "<extra_id_96>", "<extra_id_97>", "<extra_id_98>", "<extra_id_99>"]}
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2dffd01fc009b7e92d98eddff8853983e271b41302ed0d363000e8581df12000
+ size 817200
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:34af5ad7f47c17bdd8839e7c18e80027dc97b166da413fdd3c6009db94f8d5ae
+ size 569947488
tokenizer.json ADDED
The diff for this file is too large to render.
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "extra_ids": 100, "additional_special_tokens": ["<extra_id_0>", "<extra_id_1>", "<extra_id_2>", "<extra_id_3>", "<extra_id_4>", "<extra_id_5>", "<extra_id_6>", "<extra_id_7>", "<extra_id_8>", "<extra_id_9>", "<extra_id_10>", "<extra_id_11>", "<extra_id_12>", "<extra_id_13>", "<extra_id_14>", "<extra_id_15>", "<extra_id_16>", "<extra_id_17>", "<extra_id_18>", "<extra_id_19>", "<extra_id_20>", "<extra_id_21>", "<extra_id_22>", "<extra_id_23>", "<extra_id_24>", "<extra_id_25>", "<extra_id_26>", "<extra_id_27>", "<extra_id_28>", "<extra_id_29>", "<extra_id_30>", "<extra_id_31>", "<extra_id_32>", "<extra_id_33>", "<extra_id_34>", "<extra_id_35>", "<extra_id_36>", "<extra_id_37>", "<extra_id_38>", "<extra_id_39>", "<extra_id_40>", "<extra_id_41>", "<extra_id_42>", "<extra_id_43>", "<extra_id_44>", "<extra_id_45>", "<extra_id_46>", "<extra_id_47>", "<extra_id_48>", "<extra_id_49>", "<extra_id_50>", "<extra_id_51>", "<extra_id_52>", "<extra_id_53>", "<extra_id_54>", "<extra_id_55>", "<extra_id_56>", "<extra_id_57>", "<extra_id_58>", "<extra_id_59>", "<extra_id_60>", "<extra_id_61>", "<extra_id_62>", "<extra_id_63>", "<extra_id_64>", "<extra_id_65>", "<extra_id_66>", "<extra_id_67>", "<extra_id_68>", "<extra_id_69>", "<extra_id_70>", "<extra_id_71>", "<extra_id_72>", "<extra_id_73>", "<extra_id_74>", "<extra_id_75>", "<extra_id_76>", "<extra_id_77>", "<extra_id_78>", "<extra_id_79>", "<extra_id_80>", "<extra_id_81>", "<extra_id_82>", "<extra_id_83>", "<extra_id_84>", "<extra_id_85>", "<extra_id_86>", "<extra_id_87>", "<extra_id_88>", "<extra_id_89>", "<extra_id_90>", "<extra_id_91>", "<extra_id_92>", "<extra_id_93>", "<extra_id_94>", "<extra_id_95>", "<extra_id_96>", "<extra_id_97>", "<extra_id_98>", "<extra_id_99>"], "special_tokens_map_file": null, "name_or_path": "stefan-it/it5-efficient-small-el32", "sp_model_kwargs": {}, "tokenizer_class": "T5Tokenizer"}
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 10.0,
+     "train_loss": 2.137477010615576,
+     "train_runtime": 21978.9943,
+     "train_samples": 114830,
+     "train_samples_per_second": 52.245,
+     "train_steps_per_second": 6.531
+ }
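As a quick sanity check, the reported throughput figures are consistent with 10 epochs over 114,830 samples at a train batch size of 8, assuming the standard Trainer definitions of these fields (a back-of-the-envelope check):

```python
import math

# Figures from train_results.json above; batch size from the model card.
train_samples = 114_830
num_epochs = 10.0
train_runtime_s = 21_978.9943
train_batch_size = 8

samples_per_second = train_samples * num_epochs / train_runtime_s
steps_per_second = math.ceil(train_samples / train_batch_size) * num_epochs / train_runtime_s

print(f"{samples_per_second:.3f} samples/s")  # ~52.245, matching train_samples_per_second
print(f"{steps_per_second:.3f} steps/s")      # ~6.531, matching train_steps_per_second
```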
trainer_state.json ADDED
@@ -0,0 +1,2111 @@