TheBloke committed on
Commit d280114
1 Parent(s): 7756bb1

Update README.md

Files changed (1)
  1. README.md +13 -1
README.md CHANGED
@@ -1,6 +1,8 @@
 ---
 inference: false
 license: other
+datasets:
+- jondurbin/airoboros-gpt4-1.3
 ---
 
 <!-- header start -->
@@ -17,18 +19,28 @@ license: other
 </div>
 <!-- header end -->
 
-# John Durbin's Airoboros 33B GPT4 1.3 GPTQ
+# Jon Durbin's Airoboros 33B GPT4 1.3 GPTQ
 
 These files are GPTQ 4bit model files for [John Durbin's Airoboros 33B GPT4 1.3](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.3).
 
 It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
 
+**Note from model creator Jon Durbin: This version has problems, use if you dare, or wait for 1.4.**
+
 ## Repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.3-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.3-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.3)
 
+## Prompt template
+
+```
+A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
+USER: prompt
+ASSISTANT:
+```
+
 ## How to easily download and use this model in text-generation-webui
 
 Please make sure you're using the latest version of text-generation-webui
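The prompt template added in this commit can be applied with plain string formatting. A minimal sketch, assuming a hypothetical `build_prompt` helper and an example question (not part of the commit):

```python
# System preamble from the Airoboros 1.3 prompt template shown in the diff above.
SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the USER:/ASSISTANT: turn format.

    The string ends with "ASSISTANT:" so the model's generation
    continues as the assistant's reply.
    """
    return f"{SYSTEM}\nUSER: {user_message}\nASSISTANT:"

# Hypothetical example input, for illustration only.
print(build_prompt("What is GPTQ quantisation?"))
```

The trailing `ASSISTANT:` with no newline matters: generation is expected to continue directly after it.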