andreaparker committed
Commit 85e65d1
1 Parent(s): cc2f763

flan_t5_test_2023_01_31

Files changed (3)
  1. README.md +84 -0
  2. generation_config.json +7 -0
  3. pytorch_model.bin +1 -1
README.md ADDED
@@ -0,0 +1,84 @@
+ ---
+ language:
+ - flan_t5_test_2023_01_31
+ license: apache-2.0
+ tags:
+ - generated_from_trainer
+ datasets:
+ - samsum
+ metrics:
+ - rouge
+ model-index:
+ - name: flan-t5-base-samsum
+   results:
+   - task:
+       name: Sequence-to-sequence Language Modeling
+       type: text2text-generation
+     dataset:
+       name: samsum
+       type: samsum
+       config: samsum
+       split: test
+       args: samsum
+     metrics:
+     - name: Rouge1
+       type: rouge
+       value: 47.4339
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # flan-t5-base-samsum
+
+ This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.3772
+ - Rouge1: 47.4339
+ - Rouge2: 23.9608
+ - Rougel: 40.0566
+ - Rougelsum: 43.6981
+ - Gen Len: 17.3162
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 5
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
+ | 1.4403 | 1.0 | 1842 | 1.3829 | 46.5338 | 23.1342 | 39.4468 | 42.8518 | 17.0977 |
+ | 1.3534 | 2.0 | 3684 | 1.3732 | 47.0913 | 23.5016 | 39.5941 | 43.238 | 17.4554 |
+ | 1.2795 | 3.0 | 5526 | 1.3709 | 46.8916 | 23.3226 | 39.5661 | 43.1582 | 17.2027 |
+ | 1.2313 | 4.0 | 7368 | 1.3736 | 47.441 | 23.7501 | 40.0446 | 43.6336 | 17.2198 |
+ | 1.1934 | 5.0 | 9210 | 1.3772 | 47.4339 | 23.9608 | 40.0566 | 43.6981 | 17.3162 |
+
+
+ ### Framework versions
+
+ - Transformers 4.26.0
+ - Pytorch 1.13.1+cu116
+ - Datasets 2.9.0
+ - Tokenizers 0.13.2
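For reference, the fine-tuned checkpoint drops into the standard Transformers summarization pipeline. A minimal inference sketch, assuming the checkpoint is published under the model-index name `flan-t5-base-samsum` (the actual Hub repo id may differ):

```python
from transformers import pipeline

# Minimal sketch: repo id taken from the model-index name above; substitute
# the actual Hub path (e.g. "<user>/flan-t5-base-samsum") if it differs.
summarizer = pipeline("summarization", model="flan-t5-base-samsum")

# A samsum-style dialogue (illustrative input).
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

print(summarizer(dialogue)[0]["summary_text"])
```

The reported Gen Len of roughly 17 tokens suggests the model defaults to short, single-sentence summaries.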
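The hyperparameter list in the card maps one-to-one onto `Seq2SeqTrainingArguments`. The card does not say which trainer was used, so the following is a sketch under the assumption of a standard `Seq2SeqTrainer` setup; values not reported in the card are marked as assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters reported in the card; values not listed there
# (output_dir, evaluation_strategy, predict_with_generate) are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-samsum",  # hypothetical output path
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",   # assumed: the results table evaluates once per epoch
    predict_with_generate=True,    # assumed: needed to compute ROUGE and Gen Len
)
```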
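The Rouge1/Rouge2/Rougel/Rougelsum figures follow the usual Trainer convention of Hugging Face's `evaluate` ROUGE metric scaled by 100. A sketch of that computation with placeholder strings, not actual model outputs:

```python
import evaluate

# Placeholder prediction/reference pair; a real evaluation would use the
# model's generated summaries and the samsum test-split targets.
rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["Amanda will bring Jerry cookies tomorrow."],
    references=["Amanda baked cookies and will bring some to Jerry tomorrow."],
)
# The card reports these values multiplied by 100 (e.g. Rouge1: 47.4339).
print({k: round(v * 100, 4) for k, v in scores.items()})
```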
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.26.0"
+ }
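generation_config.json is new in Transformers 4.26, matching the `transformers_version` recorded in the file: generation defaults now live in their own file instead of config.json. A minimal sketch of how these values are read back (repo id is a placeholder):

```python
from transformers import GenerationConfig

# Minimal sketch: Transformers 4.26+ loads these defaults automatically with
# the model; they can also be inspected directly. Repo id is a placeholder.
gen_config = GenerationConfig.from_pretrained("flan-t5-base-samsum")
print(gen_config.decoder_start_token_id)  # 0
print(gen_config.eos_token_id)            # 1
print(gen_config.pad_token_id)            # 0
```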
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6967554cd20ce84ece990620be96d288d88ac16d7952849c2e8c873f5a279769
+ oid sha256:a70e77be9b6270e9ec28accaee7fb761fe6627fbc979d38c205591fd4bb33bfa
  size 990408885
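The pytorch_model.bin change only swaps the sha256 oid in the Git LFS pointer; the ~990 MB of weights live in LFS storage, and the unchanged `size` line shows the file length is identical. A minimal sketch for checking a downloaded copy of the weights against the new pointer:

```python
import hashlib

# Minimal sketch: hash the resolved weight file (not the pointer) and compare
# against the oid recorded in the LFS pointer above. Path is illustrative.
def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "a70e77be9b6270e9ec28accaee7fb761fe6627fbc979d38c205591fd4bb33bfa"
assert sha256_of("pytorch_model.bin") == expected
```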