LeoR2 committed on
Commit
508d597
1 Parent(s): d8e1a46

End of training

README.md CHANGED
@@ -3,6 +3,8 @@ license: apache-2.0
 base_model: google/flan-t5-small
 tags:
 - generated_from_trainer
+metrics:
+- rouge
 model-index:
 - name: flan-t5-small-finetuned-Coca
   results: []
@@ -14,6 +16,13 @@ should probably proofread and complete it, then remove this comment. -->
 # flan-t5-small-finetuned-Coca
 
 This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: nan
+- Rouge1: 35.7982
+- Rouge2: 14.0591
+- Rougel: 29.6744
+- Rougelsum: 29.7484
+- Gen Len: 17.1045
 
 ## Model description
 
@@ -32,20 +41,21 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 1e-05
 - train_batch_size: 3
 - eval_batch_size: 3
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 2
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
-| No log        | 1.0   | 201  | nan             | 35.7316 | 14.1041 | 29.7672 | 29.7721   | 17.1045 |
+| No log        | 1.0   | 201  | nan             | 35.7982 | 14.0591 | 29.6744 | 29.7484   | 17.1045 |
+| No log        | 2.0   | 402  | nan             | 35.7982 | 14.0591 | 29.6744 | 29.7484   | 17.1045 |
 
 
 ### Framework versions
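The `lr_scheduler_type: linear` setting in the diff above decays the learning rate linearly from its base value down to zero over the course of training. A minimal sketch of that schedule, assuming zero warmup steps and 402 total optimizer steps (2 epochs × 201 steps per epoch, per the results table) — illustration only, not the trainer's actual implementation:

```python
# Linear LR decay from the base learning rate to 0 over training.
# Assumptions: no warmup, 402 total steps (2 epochs x 201 steps/epoch).

BASE_LR = 1e-05
TOTAL_STEPS = 402

def linear_lr(step: int, base_lr: float = BASE_LR, total_steps: int = TOTAL_STEPS) -> float:
    """Learning rate at a given optimizer step under a linear decay schedule."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

print(linear_lr(0))    # start of training: the full base LR
print(linear_lr(201))  # end of epoch 1: half the base LR
print(linear_lr(402))  # end of training: 0.0
```

With warmup (which `transformers` schedulers support), the rate would first ramp up from 0 before decaying; the zero-warmup case shown here matches the defaults when no warmup arguments are given.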
runs/Nov17_00-49-00_9a95aa27ca08/events.out.tfevents.1700182202.9a95aa27ca08.3144.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:966c4bc5374120edadaa5c24505294c4c9ec59123d01ed65ceb0281999caba46
+size 6683
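Stepping back to the Rouge1 scores reported in the README diff above: ROUGE-1 is a unigram-overlap F-measure between a generated summary and a reference. A toy sketch for illustration only — it assumes plain whitespace tokenization and no stemming, unlike the full ROUGE implementation used for the actual evaluation:

```python
# Toy ROUGE-1 F1: clipped unigram overlap between candidate and reference.
# Assumption: whitespace tokenization, lowercasing, no stemming.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """F1 over unigram counts; Counter & clips each word to its min count."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat lay on the mat")
```

ROUGE-2 and ROUGE-L reported above extend this idea to bigrams and to the longest common subsequence, respectively.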
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2c483362e8d03a6082d9a6d4a520f72320d1baa61edab6566b22025efd0e0493
+oid sha256:edf24a6deb83bd2061858a8d330c4219c6989df35866b49c092577ada9179c3c
 size 4728
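The `training_args.bin` and tfevents diffs above change Git LFS pointer files, not the binaries themselves: each pointer is a small text file of `key value` lines (version, oid, size) standing in for the large object. A minimal parser for that three-line format, using the new `training_args.bin` pointer from this commit as input:

```python
# Parse a Git LFS pointer file (the "key value" line format shown above)
# into a dict of its fields.

def parse_lfs_pointer(text: str) -> dict:
    """Split each pointer line at the first space into a key/value pair."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:edf24a6deb83bd2061858a8d330c4219c6989df35866b49c092577ada9179c3c
size 4728"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # file size in bytes, as a string
print(info["oid"])   # hash-algo:hex-digest of the stored object
```

This is why both versions of `training_args.bin` report the same 4728-byte size while only the `oid` line changes: the pointer describes the object's content hash, and only the hash differs between commits.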