EdBerg committed on
Commit 94b2e37
1 parent: 1eb8562

End of training

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
-base_model: yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
+base_model: inceptionai/jais-adapted-13b-chat
 library_name: peft
-license: mit
+license: apache-2.0
 tags:
 - trl
 - sft
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Combined
 
-This model is a fine-tuned version of [yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B](https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B) on an unknown dataset.
+This model is a fine-tuned version of [inceptionai/jais-adapted-13b-chat](https://huggingface.co/inceptionai/jais-adapted-13b-chat) on an unknown dataset.
 
 ## Model description
 
@@ -44,7 +44,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.03
-- training_steps: 100
+- training_steps: 200
 - mixed_precision_training: Native AMP
 
 ### Training results
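
The hyperparameter hunk reads like the auto-generated output of transformers' Trainer. As a rough, unverified reconstruction, the values touched by this commit would map onto `TrainingArguments` roughly as sketched below; the learning rate, batch sizes, and dataset are not recorded in this diff, so they are omitted rather than guessed.

```python
# Minimal sketch, assuming the card was auto-generated by transformers' Trainer.
# Only hyperparameters visible in the diff are set; everything else (learning
# rate, batch size, dataset) is unknown and deliberately left out.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Combined",        # card title; the real output path is not in the diff
    max_steps=200,                # training_steps: 200 (raised from 100 by this commit)
    lr_scheduler_type="cosine",   # lr_scheduler_type: cosine
    warmup_ratio=0.03,            # lr_scheduler_warmup_ratio: 0.03
    adam_beta1=0.9,               # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,            # and epsilon=1e-08
    fp16=True,                    # mixed_precision_training: Native AMP
)
```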
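
Since the card now pairs `library_name: peft` with the new base model, a hypothetical loading sketch follows. The adapter repo id `EdBerg/Combined` is inferred from the committer name and card title; it is an assumption, not something the diff states.

```python
# Hypothetical usage sketch: attach this PEFT adapter to the base model named
# in the updated card. The adapter id "EdBerg/Combined" is an assumption.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "inceptionai/jais-adapted-13b-chat"  # base_model from the updated card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "EdBerg/Combined")  # adapter id: assumed
```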