---

# llama-13b-int4

This LoRA was trained for 3 epochs and has been converted to int4 (4-bit) via the GPTQ method.

Use the **safetensors** version of the model; the **pt** version is an old quantization that is no longer supported and will be removed in the future.

See the repo below for more info.

https://github.com/qwopqwop200/GPTQ-for-LLaMa
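If you want to check what a safetensors checkpoint contains before loading it, the format itself is simple: an 8-byte little-endian length prefix followed by a JSON header describing each tensor. Below is a minimal stdlib-only sketch of reading that header; the filename is illustrative, not the actual file in this repo.

```python
import json
import struct

def read_safetensors_header(path):
    """Read the JSON header of a .safetensors file.

    Layout per the safetensors format: the first 8 bytes are a
    little-endian u64 giving the header length, followed by that many
    bytes of JSON mapping tensor names to dtype, shape, and offsets.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Illustrative usage -- substitute the .safetensors file from this repo:
# header = read_safetensors_header("llama-13b-4bit.safetensors")
# for name, meta in header.items():
#     if name != "__metadata__":
#         print(name, meta["dtype"], meta["shape"])
```

This is handy for confirming you grabbed the quantized checkpoint (e.g. int32-packed weight tensors) rather than a full-precision one, without pulling in torch.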