TheBloke committed
Commit 47122ee
Parent: b564638

Update README.md

Files changed (1): README.md (+2, -1)
README.md CHANGED
@@ -28,13 +28,14 @@ It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQi
  * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/CAMEL-33B-Combined-Data-GPTQ)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/baichuan-inc/baichuan-7B)

- ## Experimental first GPTQ, requires latest AutoGPTq code
+ ## Experimental first GPTQ, requires latest AutoGPTQ code

  This is a first quantisation of a brand new model type.

  It will only work with AutoGPTQ, and only using the latest version of AutoGPTQ, compiled from source

  To merge this PR, please follow these steps to install the latest AutoGPTQ from source:
+
  **Linux**
  ```
  pip uninstall -y auto-gptq
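
The diff is truncated after the first Linux command. For context, a from-source AutoGPTQ install of this period typically continues along the lines sketched below; everything after the `pip uninstall` line is an assumption based on the standard git-clone-and-build workflow, not part of this commit:

```
# Sketch of a typical from-source AutoGPTQ build (assumed continuation;
# only the first command appears in the diff above)
pip uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip install .   # compiles the CUDA extension from source; requires a CUDA toolkit
```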
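Since the README states the model will only work with AutoGPTQ, a minimal loading sketch may be useful; it assumes the `AutoGPTQForCausalLM.from_quantized` API of contemporary AutoGPTQ releases, and the repo id below is a placeholder for the actual GPTQ repository:

```
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TheBloke/baichuan-7B-GPTQ"  # placeholder; substitute the actual GPTQ repo

# trust_remote_code is needed because baichuan-7B ships custom modelling code
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",
    use_safetensors=True,   # set to False if the repo ships a .bin checkpoint
    trust_remote_code=True,
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0]))
```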