antphb committed
Commit fa29bdd (1 parent: 3d602ed)

update model card README.md

Files changed (1): README.md (+12 -3)

README.md CHANGED
@@ -13,6 +13,13 @@ should probably proofread and complete it, then remove this comment. -->
  # DS-Chatbox-facebook-xglm-564M-V3

  This model is a fine-tuned version of [facebook/xglm-564M](https://huggingface.co/facebook/xglm-564M) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - eval_loss: 2.5064
+ - eval_runtime: 181.3835
+ - eval_samples_per_second: 39.099
+ - eval_steps_per_second: 4.89
+ - epoch: 1.69
+ - step: 3500

  ## Model description

@@ -31,14 +38,16 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 0.0015
+ - learning_rate: 7.500000000000001e-05
  - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 64
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 1000
- - num_epochs: 1
+ - lr_scheduler_warmup_ratio: 0.2
+ - num_epochs: 5

  ### Framework versions
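The updated hyperparameters switch from a fixed warmup step count to `lr_scheduler_warmup_ratio: 0.2` with a cosine schedule, and the added `gradient_accumulation_steps: 8` gives the listed effective batch of 8 × 8 = 64. As a minimal sketch (not code from this repo; the function name and its closed-form are illustrative, matching the card's `learning_rate`, `lr_scheduler_type`, and warmup ratio), the schedule behaves roughly like:

```python
import math

def lr_at(step, total_steps, base_lr=7.5e-05, warmup_ratio=0.2):
    """Illustrative cosine LR schedule with linear warmup.

    Mirrors the card's settings: learning_rate=7.5e-05,
    lr_scheduler_type=cosine, lr_scheduler_warmup_ratio=0.2.
    """
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr over the warmup phase.
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

For example, with 1000 total optimizer steps the learning rate ramps up over the first 200 steps, peaks at 7.5e-05, and decays to 0 by the end of training.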