a1noack committed
Commit
be7c2d8
1 Parent(s): 8f8153d

add a bit more info to model card

Files changed (1)
  README.md +2 -2
README.md CHANGED
@@ -5,10 +5,10 @@ license: mit
  thumbnail: https://en.wikipedia.org/wiki/Bart_Simpson#/media/File:Bart_Simpson_200px.png
  ---
  # BART for Gigaword
- - This model was created by fine-tuning the facebook/bart-large-cnn weights (also on HuggingFace) for the Gigaword dataset. The model was fine-tuned on the Gigaword training set for 3 epochs, and the model with the highest ROUGE-1 score on the training set batches was kept.
+ - This model was created by fine-tuning the `facebook/bart-large-cnn` weights (also on HuggingFace) for the Gigaword dataset. The model was fine-tuned on the Gigaword training set for 3 epochs, and the model with the highest ROUGE-1 score on the training set batches was kept.
  - The BART Tokenizer for CNN-Dailymail was used in the fine-tuning process and that is the tokenizer that will be loaded automatically when doing:
  ```
  from transformers import AutoTokenizer
  tokenizer = AutoTokenizer.from_pretrained("a1noack/bart-large-gigaword")
  ```
- - This model achieves ROUGE-1 / ROUGE-2 / ROUGE-L of 37.28 / 18.58 / 34.53 on the Gigaword test set; this is pretty good when compared to the PEGASUS results of 39.74 / 20.52 / 36.93.
+ - This model achieves ROUGE-1 / ROUGE-2 / ROUGE-L of 37.28 / 18.58 / 34.53 on the Gigaword test set; this is pretty good when compared to PEGASUS results of 39.12 / 19.86 / 36.24.
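For a quick check of the card's claims, the fine-tuned checkpoint can be loaded alongside the tokenizer and run end to end. The sketch below is a minimal example, not part of this commit: it assumes `AutoModelForSeq2SeqLM` resolves the BART weights, and the input text and beam-search settings are arbitrary illustrations rather than values taken from the repo.
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned Gigaword checkpoint and its CNN-Dailymail tokenizer.
tokenizer = AutoTokenizer.from_pretrained("a1noack/bart-large-gigaword")
model = AutoModelForSeq2SeqLM.from_pretrained("a1noack/bart-large-gigaword")

# Arbitrary news-style input; Gigaword targets short, headline-like summaries.
document = (
    "the us federal reserve left its benchmark interest rate unchanged on "
    "wednesday , citing steady growth and muted inflation pressures ."
)

# Tokenize, generate a summary with beam search, and decode it.
inputs = tokenizer(document, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```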