TheBloke committed
Commit
cc1dcc7
1 Parent(s): 72a56a0

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -34,9 +34,9 @@ To use the increased context with KoboldCpp, simply use `--contextsize` to set t
 
 ## Repositories available
 
-* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-SuperHOT-8K-GPTQ)
-* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-SuperHOT-8K-GGML)
-* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-SuperHOT-8K-fp16)
+* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GPTQ)
+* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GGML)
+* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-fp16)
 * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored)
 
 <!-- compatibility_ggml start -->