lewtun (HF staff) committed on
Commit 98b11ea
1 Parent(s): 3f3246b

Add evaluation results on xsum


Beep boop, I am a bot from Hugging Face's automatic evaluation service! Your model has been evaluated on the [xsum](https://huggingface.co/datasets/xsum) dataset. Accept this pull request to see the results displayed on the [Hub leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=xsum). Evaluate your model on more datasets [here](https://huggingface.co/spaces/autoevaluate/autoevaluate?dataset=xsum).

Files changed (1)
  README.md +25 -0
README.md CHANGED
@@ -20,6 +20,31 @@ model-index:
       - name: Rouge1
         type: rouge
         value: 23.9405
+    - metrics:
+      - name: ROUGE-1
+        type: rouge
+        value: 18.0911
+        verified: true
+      - name: ROUGE-2
+        type: rouge
+        value: 3.3969
+        verified: true
+      - name: ROUGE-L
+        type: rouge
+        value: 14.3524
+        verified: true
+      - name: ROUGE-LSUM
+        type: rouge
+        value: 14.4776
+        verified: true
+      task:
+        type: summarization
+        name: Summarization
+      dataset:
+        name: xsum
+        type: xsum
+        config: default
+        split: test
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
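
For context on where the verified ROUGE values in the diff come from, the snippet below is a minimal sketch of how such scores can be approximated locally, not the evaluation service's actual pipeline. It assumes the `datasets`, `evaluate`, and `transformers` libraries are installed; `"<model-id>"` is a placeholder for this repository's summarization checkpoint, and the service's exact generation settings may differ, so numbers will not match to the decimal.

```python
# Minimal sketch: approximating ROUGE scores on xsum locally.
import evaluate
from datasets import load_dataset
from transformers import pipeline

# A small test slice keeps the run quick; the official evaluation uses the full test split.
# Newer `datasets` releases may require trust_remote_code=True for xsum.
dataset = load_dataset("xsum", split="test[:100]")

# "<model-id>" is a placeholder for the summarization checkpoint in this repo.
summarizer = pipeline("summarization", model="<model-id>")

predictions = []
for out in summarizer(dataset["document"], truncation=True, batch_size=8):
    # Each item is normally a dict like {"summary_text": "..."}; handle a
    # list-of-dicts shape defensively in case multiple sequences are returned.
    item = out[0] if isinstance(out, list) else out
    predictions.append(item["summary_text"])

rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=predictions, references=dataset["summary"])
# Scores are returned as fractions in [0, 1]; multiply by 100 to compare with
# the values in the model-index (e.g. ~0.18 vs. 18.0911 for ROUGE-1).
print(scores)
```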