Initial GPTQ model commit
README.md CHANGED
@@ -42,7 +42,7 @@ GGML versions are not yet provided, as there is not yet support for SuperHOT in
 ## Repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-13B-1.3.0-SuperHOT-8K-GPTQ)
-* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co
+* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-13B-1.3.0-SuperHOT-8K-fp16)
 * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-13b-v1.3)
 
 ## How to easily download and use this model in text-generation-webui with ExLlama
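As context for the repository list in the diff above, here is a minimal sketch of fetching one of the listed repositories programmatically with the `huggingface_hub` library. The repo id is taken from the GPTQ link in the README; the local directory name is an illustrative assumption, not something specified by the commit.

```python
# Minimal sketch (not part of the README): download the GPTQ repository
# listed above using huggingface_hub. The repo_id comes from the diff;
# local_dir is an arbitrary example path.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/Vicuna-13B-1.3.0-SuperHOT-8K-GPTQ",
    local_dir="models/Vicuna-13B-1.3.0-SuperHOT-8K-GPTQ",  # example path
)
print(f"Model files downloaded to {local_path}")
```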