ajibawa-2023 committed
Commit: 96a9fbe
Parent: 0ce3763

Update README.md

Files changed (1): README.md (+2 -0)
README.md CHANGED
@@ -22,6 +22,8 @@ It is trained on around 155000 set of conversations. Each set having 10~15 conve
 Entire dataset was trained on Azure 4 x A100 80GB. For 3 epoch, training took 28 hours. DeepSpeed codebase was used for training purpose. This was trained on Llama-1 by Meta.
 Llama-1 was used as it is very useful for Uncensored conversation.
 
+This is a full fine tuned model. Links for quantized models are given below.
+
 **GPTQ GGML & AWQ**
 
 GPTQ: [Link](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GPTQ)
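For reference, a minimal sketch of loading the GPTQ checkpoint linked in the diff above with 🤗 Transformers. This is not part of the commit: it assumes a transformers install with GPTQ support (optimum + auto-gptq), and the prompt, `device_map="auto"`, and generation settings are illustrative assumptions. Only the model ID comes from the README link.

```python
# Minimal sketch, assuming transformers with GPTQ support (optimum + auto-gptq)
# is installed. The model ID is taken from the link in the README diff above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Uncensored-Jordan-7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the quantized weights on available devices.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Hello, how are you?"  # illustrative prompt, not from the README
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```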