ajibawa-2023 committed
Commit e364d01
1 parent: cb0ed39

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -19,8 +19,8 @@ It is trained on around 155000 set of conversations. Each set having 10~15 conve
  Publishing anything this model generates is the same as publishing it yourself. We are not responsible for what you generate using this model.
 
  **Training:**
- Entire dataset was trained on Azure 4 x A100 80GB. For 3 epoch, training took 77 hours. DeepSpeed codebase was used for training purpose. This was trained on Llama-1 by Meta.
- Llama-1 was used as it is very useful for Uncensored conversation.
+ Entire dataset was trained on Azure 4 x A100 80GB. For 3 epoch, training took 77 hours. DeepSpeed codebase was used for training purpose. This was trained on Llama-2 by Meta.
+
 
  **GPTQ GGML & AWQ**
 
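
For context on the updated **Training** line, below is a minimal sketch of what a DeepSpeed fine-tune of a Llama-2 base model on multi-GPU A100 hardware could look like. This is not the author's actual training code; the checkpoint name, batch sizes, ZeRO stage, and learning rate are illustrative assumptions, not values taken from the commit.

```python
# Minimal sketch only: DeepSpeed fine-tuning of a Llama-2 checkpoint.
# All hyperparameters are illustrative assumptions, not the author's settings.
import deepspeed
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-13b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

ds_config = {
    "train_micro_batch_size_per_gpu": 1,   # illustrative
    "gradient_accumulation_steps": 8,      # illustrative
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 2},     # ZeRO stage is an assumption
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},
}

# deepspeed.initialize wraps the model in a training engine that handles
# optimizer state sharding, gradient accumulation, and mixed precision.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# A training loop over tokenized conversation batches would then look like:
# for batch in dataloader:
#     loss = engine(**batch).loss
#     engine.backward(loss)
#     engine.step()
```

On a single node with 4 x A100 80GB, a script like this would typically be launched with `deepspeed --num_gpus 4 train.py`, where `train.py` is a hypothetical file name.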