sharmax-vikas committed
Commit 3c43540
1 parent: ba5a953

End of training
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+library_name: transformers
 license: apache-2.0
 base_model: google/flan-t5-base
 tags:
@@ -22,7 +23,7 @@ model-index:
     metrics:
     - name: Rouge1
      type: rouge
-     value: 47.2141
+     value: 47.355
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,12 +33,12 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.3723
-- Rouge1: 47.2141
-- Rouge2: 23.4799
-- Rougel: 39.7474
-- Rougelsum: 43.3222
-- Gen Len: 17.2589
+- Loss: 1.3736
+- Rouge1: 47.355
+- Rouge2: 23.7601
+- Rougel: 39.8403
+- Rougelsum: 43.4718
+- Gen Len: 17.1575
 
 ## Model description
 
@@ -68,49 +69,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
-| 1.4665 | 1.0 | 921 | 1.3915 | 46.9661 | 23.1441 | 39.2886 | 43.1249 | 17.2894 |
-| 1.3722 | 2.0 | 1842 | 1.3778 | 47.1196 | 23.1221 | 39.6222 | 43.3404 | 17.1905 |
-| 1.3145 | 3.0 | 2763 | 1.3723 | 47.2141 | 23.4799 | 39.7474 | 43.3222 | 17.2589 |
-| 1.2767 | 4.0 | 3684 | 1.3787 | 47.1852 | 23.5757 | 39.7355 | 43.4915 | 17.4554 |
-| 1.257 | 5.0 | 4605 | 1.3742 | 47.4921 | 23.6605 | 39.9254 | 43.7327 | 17.3529 |
+| 1.3641 | 1.0 | 921 | 1.3780 | 47.4054 | 23.6308 | 39.8273 | 43.3697 | 17.3004 |
+| 1.3074 | 2.0 | 1842 | 1.3736 | 47.355 | 23.7601 | 39.8403 | 43.4718 | 17.1575 |
+| 1.2592 | 3.0 | 2763 | 1.3740 | 47.2208 | 23.4972 | 39.7293 | 43.2546 | 17.2320 |
+| 1.2232 | 4.0 | 3684 | 1.3794 | 47.9156 | 24.2451 | 40.2628 | 43.9122 | 17.4017 |
+| 1.2042 | 5.0 | 4605 | 1.3780 | 47.8982 | 24.1707 | 40.2955 | 43.8939 | 17.3712 |
 
 
 ### Framework versions
 
-- Transformers 4.42.4
-- Pytorch 2.1.2
-- Datasets 2.20.0
+- Transformers 4.44.2
+- Pytorch 2.4.0
+- Datasets 3.0.0
 - Tokenizers 0.19.1
-
-### How to use model
-```py
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
-
-ckpt = 'sharmax-vikas/flan-t5-base-samsum'
-tokenizer = AutoTokenizer.from_pretrained(ckpt)
-# Use AutoModelForSeq2SeqLM for text generation tasks like summarization
-model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)
-
-summarize = pipeline('summarization', tokenizer=tokenizer, model=model)
-
-result = summarize('''Hannah: Hey, do you have Betty's number?
-Amanda: Lemme check
-Hannah: <file_gif>
-Amanda: Sorry, can't find it.
-Amanda: Ask Larry
-Amanda: He called her last time we were at the park together
-Hannah: I don't know him well
-Hannah: <file_gif>
-Amanda: Don't be shy, he's very nice
-Hannah: If you say so..
-Hannah: I'd rather you texted him
-Amanda: Just text him 🙂
-Hannah: Urgh.. Alright
-Hannah: Bye
-Amanda: Bye bye''')
-
-print(result[0])
-
-#{'summary_text': "Amanda can't find Betty's number. Amanda will ask Larry. Larry called Betty last time they were at the park together."}
-```
-
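The updated card's headline numbers (Loss 1.3736, Rouge1 47.355) correspond to the epoch-2 row of the results table, which is consistent with the Trainer keeping the checkpoint with the lowest validation loss rather than the highest Rouge1 (epoch 4). A minimal sketch, with values copied from the table, makes that selection explicit:

```python
# Per-epoch validation metrics, copied from the updated README table.
rows = [
    {"epoch": 1, "val_loss": 1.3780, "rouge1": 47.4054},
    {"epoch": 2, "val_loss": 1.3736, "rouge1": 47.3550},
    {"epoch": 3, "val_loss": 1.3740, "rouge1": 47.2208},
    {"epoch": 4, "val_loss": 1.3794, "rouge1": 47.9156},
    {"epoch": 5, "val_loss": 1.3780, "rouge1": 47.8982},
]

# The reported checkpoint minimizes validation loss (epoch 2)...
best_by_loss = min(rows, key=lambda r: r["val_loss"])
# ...while selecting on Rouge1 would have picked epoch 4 instead.
best_by_rouge1 = max(rows, key=lambda r: r["rouge1"])

print(best_by_loss["epoch"], best_by_rouge1["epoch"])  # → 2 4
```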
config.json CHANGED
@@ -56,7 +56,7 @@
   },
   "tie_word_embeddings": false,
   "torch_dtype": "float32",
-  "transformers_version": "4.42.4",
+  "transformers_version": "4.44.2",
   "use_cache": true,
   "vocab_size": 32128
 }
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "decoder_start_token_id": 0,
   "eos_token_id": 1,
   "pad_token_id": 0,
-  "transformers_version": "4.42.4"
+  "transformers_version": "4.44.2"
 }
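For context, the ids in generation_config.json (unchanged here apart from the version bump) are what drive decoding: T5-style models start generation from the pad token, which is why decoder_start_token_id and pad_token_id are both 0, and they stop at the end-of-sequence token (id 1). A small sketch reading the updated file's contents:

```python
import json

# Contents of the updated generation_config.json, reproduced from the diff.
cfg = json.loads("""
{
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.44.2"
}
""")

# T5-family models begin decoding from the pad token, so the two ids coincide.
assert cfg["decoder_start_token_id"] == cfg["pad_token_id"] == 0
```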
logs/events.out.tfevents.1726379915.657a9089ab8d.36.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:134ac600379dabcc66ca99f1c88171665b0b2199d15ff8317e1bc34a50ab7226
+size 5940
logs/events.out.tfevents.1726380017.657a9089ab8d.36.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d61f58e786f463254e1d52083a0f2783f7360fa4a8d14ad650b5facec0bff37
+size 4184
logs/events.out.tfevents.1726380067.657a9089ab8d.36.2 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a413066a31982a2d1a5bf2f0f550af07a4e69a314a3a7019fe42e6dfc7cf7e4c
+size 4184
logs/events.out.tfevents.1726380194.657a9089ab8d.331.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bdce85d06e5fd68eac3765f81050d0e5f7e4209e4cb61f7fe7a2bad0d2d5e822
+size 22733
logs/events.out.tfevents.1726389304.657a9089ab8d.331.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7cf8c8ea225a9ce9ca794f4bb5c43c6f77008879cae9d7f4f770c4eee59cdfbb
+size 613
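The ADDED TensorBoard event files above are stored as Git LFS pointers (three "key value" lines) rather than as raw log data. As a rough sketch (parse_lfs_pointer below is a hypothetical helper, not part of this repo), such a pointer can be split into its fields like so:

```python
# Parse a git-lfs pointer file (three "key value" lines) into a dict.
# parse_lfs_pointer is a hypothetical helper for illustration only.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied from the first ADDED log file in the diff.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:134ac600379dabcc66ca99f1c88171665b0b2199d15ff8317e1bc34a50ab7226
size 5940"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # → 5940
```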
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3e08159150fca9e44566a5ea96bde2ce5315ae7ce8cd0f3a78d82dfd6d852e96
+oid sha256:de3d3e6034014986f55afa45f20bc535137a3e618ea622b526766cb7d4d57b78
 size 990345064
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7d0849448372672a9662898c51ca798c01fd1a16bd762695e499ca30ae9dcc82
-size 5304
+oid sha256:84c5512ba816dbf43b079d321a7083d7bfda6d269b30b9d235c598ab1ce44886
+size 5368