ajibawa-2023 committed on
Commit e84e626
1 Parent(s): e32175b

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -23,8 +23,8 @@ All the credit goes to the Open-Orca team for releasing SlimOrca dataset.
 
 
 **Training:**
-Entire dataset was trained on Azure 4 x A100 80GB. For 3 epoch, training took almost 11 Days. DeepSpeed codebase was used for training purpose. This was trained on Llama-1 by Meta.
-Entire data was fine tuned on Llama-2 by Meta.
+Entire dataset was trained on Azure 4 x A100 80GB. For 3 epoch, training took almost 11 Days. DeepSpeed codebase was used for training purpose.
+Entire data is trained on Llama-2 by Meta.
 
 This is a full fine tuned model. Links for quantized models are given below.
 