graphcore-rahult committed on
Commit eaaf87b
1 Parent(s): ab1bb84

update model card README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 6.8086
+- Loss: 6.0977
 
 ## Model description
 
@@ -39,8 +39,8 @@ The following hyperparameters were used during training:
 - seed: 42
 - distributed_type: IPU
 - gradient_accumulation_steps: 64
-- total_train_batch_size: 512
-- total_eval_batch_size: 20
+- total_train_batch_size: 128
+- total_eval_batch_size: 5
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
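
For illustration only, a minimal sketch of how the hyperparameters listed in the updated card could be expressed with the standard Hugging Face `Trainer` API. The card describes training on Graphcore IPUs, where `optimum-graphcore`'s IPU trainer would normally be used instead, so this is not the authors' actual setup. The per-device micro-batch size, output directory, dataset, learning rate, and epoch count do not appear in this diff and are assumptions here; with `gradient_accumulation_steps=64`, a total train batch size of 128 implies a micro-batch of 2 on one replica (or 1 on two replicas).

```python
# Hedged sketch: maps the card's listed hyperparameters onto TrainingArguments.
# Values not shown in the diff (micro-batch size, output_dir, dataset, learning
# rate, epochs) are assumptions, not taken from the model card.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

args = TrainingArguments(
    output_dir="gpt2-finetuned",       # hypothetical output path
    per_device_train_batch_size=2,     # assumption: 2 x 64 accumulation steps = 128 total
    per_device_eval_batch_size=5,      # matches total_eval_batch_size on a single replica
    gradient_accumulation_steps=64,
    adam_beta1=0.9,                    # card lists Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # and epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)

# Usage sketch; `train_dataset` is unspecified in the card ("None dataset"):
# trainer = Trainer(
#     model=model,
#     args=args,
#     train_dataset=train_dataset,
#     data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
# )
# trainer.train()
```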