TheBloke committed on
Commit ccd9d4e
1 Parent(s): c8d318e

Initial GPTQ model commit

Files changed (1)
  1. README.md +0 -8
README.md CHANGED
@@ -29,14 +29,6 @@ It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/CAMEL-33B-Combined-Data-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/camel-ai/CAMEL-33B-Combined-Data)

- ## Prompt template
-
- ```
- A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
- USER: prompt
- ASSISTANT:
- ```
-
  ## How to easily download and use this model in text-generation-webui

  Please make sure you're using the latest version of text-generation-webui
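For readers who want to run the quantised weights outside text-generation-webui, the sketch below shows one possible way to load the GPTQ model with the AutoGPTQ library and fill in the Vicuna-style prompt template that this commit removes from the README. This is not taken from the repo's own instructions: the repository id, the generation parameters, and the example question are assumptions for illustration only.

```python
# Minimal sketch (assumptions, not the repo's documented usage):
# load the GPTQ-quantised model with AutoGPTQ and run one prompt through
# the Vicuna-style template shown in the diff above.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name = "TheBloke/CAMEL-33B-Combined-Data-GPTQ"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name,
    use_safetensors=True,  # GPTQ weights are commonly shipped as safetensors
    device="cuda:0",
)

# Fill the prompt template: system line, then USER turn, then ASSISTANT cue.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "USER: Explain what 4-bit quantisation does to a language model.\n"
    "ASSISTANT:"
)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(
    inputs=input_ids,
    do_sample=True,
    temperature=0.7,
    max_new_tokens=256,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```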
 
29
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/CAMEL-33B-Combined-Data-GGML)
30
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/camel-ai/CAMEL-33B-Combined-Data)
31
 
 
 
 
 
 
 
 
 
32
  ## How to easily download and use this model in text-generation-webui
33
 
34
  Please make sure you're using the latest version of text-generation-webui