ajibawa-2023 committed
Commit a3fedd8 • 1 Parent(s): f4285a4
Update README.md
README.md CHANGED
@@ -26,7 +26,7 @@ This model has enhanced coding capabilities besides other capabilities such as *
 Entire model was trained on 4 x A100 80GB. For 2 epoch, training took **21 Days**. Fschat & DeepSpeed codebase was used for training purpose. This was trained on Llama-2 by Meta.
 
 
 
-This is a full fine tuned model. Links for quantized models will updated soon.
+This is a full fine tuned model. Links for quantized models will be updated soon.
 
 
 
 **GPTQ, GGUF, AWQ & Exllama**