iuliaem committed on
Commit
efac3a8
1 Parent(s): f1e9374

Update README.md

Files changed (1)
  1. README.md +24 -1
README.md CHANGED
@@ -6,4 +6,27 @@ tags:
 
  This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
  - Library: [More Information Needed]
- - Docs: [More Information Needed]
+ - Docs: [More Information Needed]
+ 
+ 
+ done
+ Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
+ Non-default generation parameters: {'max_length': 62, 'min_length': 11, 'early_stopping': True, 'num_beams': 6, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
+ done
+ Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
+ Non-default generation parameters: {'max_length': 62, 'min_length': 11, 'early_stopping': True, 'num_beams': 6, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
+ done
+ Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
+ Non-default generation parameters: {'max_length': 62, 'min_length': 11, 'early_stopping': True, 'num_beams': 6, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
+ There were missing keys in the checkpoint model loaded: ['model.encoder.embed_tokens.weight', 'model.decoder.embed_tokens.weight', 'lm_head.weight'].
+ TrainOutput(global_step=25746, training_loss=2.2170493731102003, metrics={'train_runtime': 18355.3967, 'train_samples_per_second': 11.221, 'train_steps_per_second': 1.403, 'total_flos': 1.4870878331849933e+17, 'train_loss': 2.2170493731102003, 'epoch': 2.999825225750073})
+ 
+ [25746/25746 5:05:54, Epoch 2/3]
+ | Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
+ |------:|--------------:|----------------:|-------:|-------:|-------:|----------:|--------:|
+ | 0 | 2.478600 | 4.186782 | 33.523800 | 10.600200 | 24.960500 | 26.367300 | 35.008900 |
+ | 2 | 1.953600 | 4.762401 | 34.154400 | 11.034800 | 25.465400 | 26.935200 | 35.101500 |
+ 
+ 
+ [967/967 22:11]
+ done
+ {'eval_loss': 4.169002056121826, 'eval_rouge1': 33.2377, 'eval_rouge2': 10.4108, 'eval_rougeL': 24.8623, 'eval_rougeLsum': 26.2076, 'eval_gen_len': 35.0672, 'eval_runtime': 1349.1495, 'eval_samples_per_second': 2.867, 'eval_steps_per_second': 0.717, 'epoch': 2.999825225750073}
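
For context on the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration the README mentions, here is a minimal sketch of how a model is typically pushed and reloaded with it. The class name, layer size, and repo id below are hypothetical placeholders, not details taken from this repository.

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


# Hypothetical module: any nn.Module subclass gains save_pretrained,
# push_to_hub, and from_pretrained by also inheriting from the mixin.
class ToyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


model = ToyModel()
model.save_pretrained("toy-model")                       # writes weights + config locally
# model.push_to_hub("your-username/toy-model")           # uploads to the Hub (needs auth)
# reloaded = ToyModel.from_pretrained("your-username/toy-model")
```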
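
The repeated warning in the diff points at a concrete fix: move the listed parameters out of the model config and into a dedicated `generation_config.json`, as the linked docs describe. A minimal sketch, using the exact parameters reported in the log; the checkpoint path is a placeholder.

```python
from transformers import GenerationConfig

# The non-default generation parameters reported in the log above.
generation_config = GenerationConfig(
    max_length=62,
    min_length=11,
    early_stopping=True,
    num_beams=6,
    no_repeat_ngram_size=3,
    forced_eos_token_id=2,
)
generation_config.save_pretrained("path/to/checkpoint")  # writes generation_config.json
```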
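
The final evaluation line reports ROUGE scores (eval_rouge1 ≈ 33.24, eval_rougeL ≈ 24.86). For reference, scores of this kind are commonly computed with the `evaluate` library; the predictions and references below are toy placeholders, not this model's outputs.

```python
import evaluate

rouge = evaluate.load("rouge")

predictions = ["the cat sat on the mat"]         # placeholder decoded summaries
references = ["a cat was sitting on the mat"]    # placeholder gold summaries

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: 'rouge1', 'rouge2', 'rougeL', 'rougeLsum'
```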