Commit 3d0af6d (parent: 6ea05fb) by johngiorgi

Update README.md

Files changed (1):
  1. README.md +15 -19
README.md CHANGED
@@ -13,22 +13,18 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Overview
 
-This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the [allenai/mslr2022](https://huggingface.co/datasets/allenai/mslr2022) ms2 dataset.
+This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the allenai/mslr2022 ms2 dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 3.7527
-- eval_rouge1_fmeasure_mean: 27.9314
-- eval_rouge2_fmeasure_mean: 9.4000
-- eval_rougeL_fmeasure_mean: 20.9302
-- eval_rougeLsum_fmeasure_mean: 23.6179
-- eval_bertscore_hashcode: microsoft/deberta-xlarge-mnli_L40_no-idf_version=0.3.11(hug_trans=4.21.0.dev0)-rescaled_fast-tokenizer
-- eval_bertscore_f1_mean: 23.5092
-- eval_seed: 42
-- eval_model_name_or_path: output/ms2/led-base/baseline
-- eval_doc_sep_token: </s>
-- eval_runtime: 820.6405
-- eval_samples_per_second: 2.463
-- eval_steps_per_second: 0.617
-- step: 0
+- Loss: 3.7602
+- Rouge1 Fmeasure Mean: 28.5338
+- Rouge2 Fmeasure Mean: 9.5060
+- Rougel Fmeasure Mean: 20.9321
+- Rougelsum Fmeasure Mean: 24.0998
+- Bertscore Hashcode: microsoft/deberta-xlarge-mnli_L40_no-idf_version=0.3.11(hug_trans=4.21.0.dev0)-rescaled_fast-tokenizer
+- Bertscore F1 Mean: 22.7619
+- Seed: 42
+- Model Name Or Path: allenai/led-base-16384
+- Doc Sep Token: </s>
 
 ## Model description
 
@@ -47,12 +43,12 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 3e-05
 - train_batch_size: 4
 - eval_batch_size: 4
 - seed: 42
-- gradient_accumulation_steps: 8
-- total_train_batch_size: 32
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
@@ -64,5 +60,5 @@ The following hyperparameters were used during training:
 
 - Transformers 4.21.0.dev0
 - Pytorch 1.10.0
-- Datasets 2.3.3.dev0
+- Datasets 2.4.0
 - Tokenizers 0.12.1
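
The updated card keeps `Doc Sep Token: </s>`. In multi-document summarization setups like the MSLR2022 ms2 task, a separator token like this is typically used to concatenate the individual study abstracts into a single model input. A minimal sketch of that joining step — the exact preprocessing in this pipeline is an assumption, and `join_documents` plus the sample abstracts are hypothetical:

```python
# Value reported in the model card's evaluation config.
DOC_SEP_TOKEN = "</s>"

def join_documents(abstracts, doc_sep_token=DOC_SEP_TOKEN):
    """Concatenate per-study abstracts into one input string,
    separated by the document separator token."""
    return f" {doc_sep_token} ".join(a.strip() for a in abstracts)

# Made-up example inputs for illustration only.
abstracts = [
    "Study 1: drug A reduced symptoms versus placebo.",
    "Study 2: no significant effect of drug A was observed.",
]
model_input = join_documents(abstracts)
print(model_input)
```

Note also that the updated hyperparameters are internally consistent: `train_batch_size` 4 × `gradient_accumulation_steps` 4 gives the reported `total_train_batch_size` of 16 (the old values, 4 × 8 = 32, were likewise consistent).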