philschmid (HF staff) committed on
Commit
6efcc54
1 Parent(s): 1c52a90

Update README.md

Files changed (1)
  1. README.md +24 -8
README.md CHANGED
@@ -10,6 +10,24 @@ tags:
 - lora
 - t5
 - flan
+metrics:
+- rouge
+model-index:
+- name: flan-t5-xxl-samsum-peft
+  results:
+  - task:
+      name: Sequence-to-sequence Language Modeling
+      type: text2text-generation
+    dataset:
+      name: samsum
+      type: samsum
+      config: samsum
+      split: train
+      args: samsum
+    metrics:
+    - name: Rouge1
+      type: rouge
+      value: 50.386161
 ---
 
 # FLAN-T5-XXL LoRA fine-tuned on `samsum`
@@ -21,12 +39,11 @@ PEFT tuned FLAN-T5 XXL model.
 
 This model is a fine-tuned version of [philschmid/flan-t5-xxl-sharded-fp16](https://huggingface.co/philschmid/flan-t5-xxl-sharded-fp16) on the samsum dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.3716
-- Rouge1: 47.2358
-- Rouge2: 23.5135
-- Rougel: 39.6266
-- Rougelsum: 43.3458
-- Gen Len: 17.3907
+
+- rouge1: 50.386161%
+- rouge2: 24.842412%
+- rougeL: 41.370130%
+- rougeLsum: 41.394230%
 
 -
 
@@ -81,5 +98,4 @@ The following hyperparameters were used during training:
 - Transformers 4.27.1
 - Pytorch 1.13.1+cu117
 - Datasets 2.9.1
-- PEFT@main
-
+- PEFT@main
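
For context, a minimal sketch of how a LoRA adapter like this one is loaded for inference with PEFT. The adapter repo id below is an assumption taken from the `model-index` name added in this commit (`flan-t5-xxl-samsum-peft`); the base model is read from the adapter config and should resolve to philschmid/flan-t5-xxl-sharded-fp16. Requires `transformers`, `peft`, and `accelerate`:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed adapter repo id, taken from the model-index name above.
peft_model_id = "philschmid/flan-t5-xxl-samsum-peft"

# The adapter config records which base model the LoRA weights were trained on.
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    config.base_model_name_or_path,  # philschmid/flan-t5-xxl-sharded-fp16
    device_map="auto",               # needs accelerate; shards across available devices
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Apply the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()

# Made-up samsum-style dialogue for illustration; the prompt format used
# during training may differ.
dialogue = (
    "Olivia: Can you pick up milk on the way home?\n"
    "Sam: Sure, anything else?\n"
    "Olivia: That's all, thanks!"
)
inputs = tokenizer(dialogue, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```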
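The rouge values in the diff are percentage scores on the samsum evaluation set. A minimal sketch of how such numbers are typically computed with the `evaluate` library; the prediction/reference pair is made up, and this is not the card's actual evaluation script:

```python
import evaluate

rouge = evaluate.load("rouge")

# Hypothetical model output and gold summary, for illustration only.
predictions = ["Sam will pick up milk on his way home."]
references = ["Olivia asks Sam to buy milk. He agrees."]

# Returns fractions in [0, 1] for rouge1, rouge2, rougeL, and rougeLsum.
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)

# The model card reports these as percentages.
print({name: round(score * 100, 6) for name, score in scores.items()})
```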