KonradSzafer committed
Commit
16bb14c
1 Parent(s): 1b6cab1

google/flan-t5-small

Files changed (1)
  1. README.md +12 -29
README.md CHANGED
@@ -4,24 +4,9 @@ tags:
 - generated_from_trainer
 datasets:
 - samsum
-metrics:
-- rouge
 model-index:
 - name: flan-t5-small-samsum
-  results:
-  - task:
-      name: Sequence-to-sequence Language Modeling
-      type: text2text-generation
-    dataset:
-      name: samsum
-      type: samsum
-      config: samsum
-      split: test
-      args: samsum
-    metrics:
-    - name: Rouge1
-      type: rouge
-      value: 41.8884
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,12 +16,17 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.7427
-- Rouge1: 41.8884
-- Rouge2: 17.881
-- Rougel: 34.4405
-- Rougelsum: 38.1283
-- Gen Len: 16.8437
+- eval_loss: 1.7427
+- eval_rouge1: 41.8933
+- eval_rouge2: 17.8662
+- eval_rougeL: 34.4336
+- eval_rougeLsum: 38.1127
+- eval_gen_len: 16.8437
+- eval_runtime: 23.6635
+- eval_samples_per_second: 34.61
+- eval_steps_per_second: 2.197
+- epoch: 1.0
+- step: 32
 
 ## Model description
 
@@ -63,13 +53,6 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: linear
 - num_epochs: 1
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
-|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
-| 1.9011        | 1.0   | 32   | 1.7427          | 41.8884 | 17.881 | 34.4405 | 38.1283   | 16.8437 |
-
-
 ### Framework versions
 
 - Transformers 4.26.1
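The ROUGE-1 values in this diff (e.g. eval_rouge1: 41.8933) are unigram-overlap F1 scores scaled to 0–100. A minimal stdlib-only sketch of the idea is below; note this is a simplification — the actual scores come from the ROUGE implementation the Trainer uses, which adds its own tokenization and stemming, so exact numbers will differ.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate summary and a reference.

    Illustrative simplification of ROUGE-1: whitespace tokenization,
    lowercasing, clipped unigram counts. Real ROUGE adds stemming and
    stricter tokenization rules.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped matches per unigram
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical samsum-style pair for illustration only:
score = rouge1_f1(
    "amanda baked cookies for jerry",
    "amanda baked cookies and will bring jerry some tomorrow",
)
print(round(score * 100, 2))  # → 57.14
```

A score of 41.89 on the samsum test split therefore means that, on average, a bit under half of the generated unigrams overlap with the reference summaries (F1-weighted).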