Philip May committed
Commit 643acc2
1 Parent(s): f14e901

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -63,7 +63,7 @@ This model is trained on the following datasets:
 | Model | rouge1 | rouge2 | rougeL | rougeLsum
 |-------|--------|--------|--------|----------
 | [ml6team/mt5-small-german-finetune-mlsum](https://huggingface.co/ml6team/mt5-small-german-finetune-mlsum) | 18.3607 | 5.3604 | 14.5456 | 16.1946
-| deutsche-telekom/mT5-small-sum-de-en-01 | 21.7336 | 7.2614 | 17.1323 | 19.3977
+| [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 21.7336 | 7.2614 | 17.1323 | 19.3977
 
 ## Evaluation on CNN Daily English Test Set (no beams)
 
@@ -72,7 +72,7 @@ This model is trained on the following datasets:
 | [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 26.7664 | 8.8243 | 18.3703 | 23.2614
 | [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364
 | [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 37.576 | 14.7389 | 24.0254 | 34.4634
-| deutsche-telekom/mT5-small-sum-de-en-01 | 37.6339 | 16.5317 | 27.1418 | 34.9951
+| [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 37.6339 | 16.5317 | 27.1418 | 34.9951
 
 
 ## Evaluation on Extreme Summarization (XSum) English Test Set (no beams)
@@ -81,7 +81,7 @@ This model is trained on the following datasets:
 |-------|--------|--------|--------|----------
 | [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 18.6204 | 3.535 | 12.3997 | 15.2111
 | [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364
-| deutsche-telekom/mT5-small-sum-de-en-01 | 32.3416 | 10.6191 | 25.3799 | 25.3908
+| [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 32.3416 | 10.6191 | 25.3799 | 25.3908
 | [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 44.2553 ♣ | 21.4289 ♣ | 36.2639 ♣ | 36.2696 ♣
 
 ♣: These values seem to be unusually high. It could be that the test set was used in the training data.
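
Scores like those in the tables above can be produced with a short ROUGE evaluation loop. The following is a minimal sketch, not taken from this commit: it assumes the `transformers` and `evaluate` packages (the `rouge` metric additionally needs `rouge_score` installed), uses greedy decoding (`num_beams=1`) to match the "no beams" heading, and the toy inputs stand in for the full MLSUM / CNN-DailyMail / XSum test splits; the exact preprocessing behind the reported numbers is not shown in the diff.

```python
# Minimal sketch (an assumption, not the commit's evaluation code):
# score a summarization model with ROUGE as reported in the tables above.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import evaluate  # the "rouge" metric also requires the rouge_score package

model_name = "deutsche-telekom/mt5-small-sum-de-en-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
rouge = evaluate.load("rouge")

def summarize(text: str) -> str:
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    # num_beams=1 means greedy decoding, matching the "no beams" headings.
    summary_ids = model.generate(**inputs, num_beams=1, max_length=128)
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# Toy stand-ins; a real run iterates over the full test split of each dataset.
predictions = [summarize("Long source article goes here ...")]
references = ["Reference summary goes here ..."]
print(rouge.compute(predictions=predictions, references=references))
```

`rouge.compute()` returns a dict keyed by `rouge1`, `rouge2`, `rougeL`, and `rougeLsum`, the same columns as the tables; its values are fractions in [0, 1], while the tables appear to report them scaled by 100.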