Chaitanya14 committed on
Commit
c5e4d7d
1 Parent(s): 2a7d70f

update model card README.md

Files changed (1)
  1. README.md +14 -21
README.md CHANGED
@@ -2,8 +2,6 @@
  license: apache-2.0
  tags:
  - generated_from_trainer
- metrics:
- - rouge
  model-index:
  - name: flan-t5-base-finetuned-xsum
    results: []
@@ -17,11 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
  This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
  - Loss: nan
- - Rouge1: 2.0327
- - Rouge2: 0.0
- - Rougel: 2.0327
- - Rougelsum: 2.0327
- - Gen Len: 5.3478
 
  ## Model description
 
@@ -41,8 +34,8 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 16
- - eval_batch_size: 16
+ - train_batch_size: 32
+ - eval_batch_size: 32
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
@@ -50,18 +43,18 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
- |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
- | No log | 1.0 | 13 | nan | 2.0327 | 0.0 | 2.0327 | 2.0327 | 5.3478 |
- | No log | 2.0 | 26 | nan | 2.0327 | 0.0 | 2.0327 | 2.0327 | 5.3478 |
- | No log | 3.0 | 39 | nan | 2.0327 | 0.0 | 2.0327 | 2.0327 | 5.3478 |
- | No log | 4.0 | 52 | nan | 2.0327 | 0.0 | 2.0327 | 2.0327 | 5.3478 |
- | No log | 5.0 | 65 | nan | 2.0327 | 0.0 | 2.0327 | 2.0327 | 5.3478 |
- | No log | 6.0 | 78 | nan | 2.0327 | 0.0 | 2.0327 | 2.0327 | 5.3478 |
- | No log | 7.0 | 91 | nan | 2.0327 | 0.0 | 2.0327 | 2.0327 | 5.3478 |
- | No log | 8.0 | 104 | nan | 2.0327 | 0.0 | 2.0327 | 2.0327 | 5.3478 |
- | No log | 9.0 | 117 | nan | 2.0327 | 0.0 | 2.0327 | 2.0327 | 5.3478 |
- | No log | 10.0 | 130 | nan | 2.0327 | 0.0 | 2.0327 | 2.0327 | 5.3478 |
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | No log | 1.0 | 7 | nan |
+ | No log | 2.0 | 14 | nan |
+ | No log | 3.0 | 21 | nan |
+ | No log | 4.0 | 28 | nan |
+ | No log | 5.0 | 35 | nan |
+ | No log | 6.0 | 42 | nan |
+ | No log | 7.0 | 49 | nan |
+ | No log | 8.0 | 56 | nan |
+ | No log | 9.0 | 63 | nan |
+ | No log | 10.0 | 70 | nan |
 
 
  ### Framework versions
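
For reference, the hyperparameters listed in the card above map directly onto `Seq2SeqTrainingArguments` from the `transformers` Trainer API. The snippet below is a minimal sketch of how such a run could be set up, not the author's actual training script: the dataset, tokenization settings, and the 10-epoch count (read off the results table) are assumptions, since the card only says the model was fine-tuned on an unknown dataset.

```python
# Sketch only: reproduces the card's listed hyperparameters with a toy dataset.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Toy stand-in data; the real document/summary pairs are not specified in the card.
raw = Dataset.from_dict({
    "document": ["summarize: The quick brown fox jumps over the lazy dog."],
    "summary": ["A fox jumps over a dog."],
})

def preprocess(batch):
    # Tokenize inputs and targets; max lengths here are assumptions.
    inputs = tokenizer(batch["document"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["summary"], truncation=True, max_length=64)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

# Hyperparameters copied from the card after this commit; the Adam betas/epsilon
# and the linear scheduler match the Trainer defaults.
args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-finetuned-xsum",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,          # implied by the 10 epochs in the results table
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,       # placeholder; the card's eval split is unknown
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

Note that the Adam betas and epsilon quoted in the card are the Trainer defaults, so only the values that differ from the defaults strictly need to be spelled out.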