TheBloke committed
Commit 6da85d8
1 Parent(s): f870fdb

Initial GPTQ model commit

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -47,10 +47,10 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/CodeLlama-34B-GGML)
  * [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codellama/CodeLlama-34b-hf)

- ## Prompt template: TBC
+ ## Prompt template: None

  ```
- Info on prompt template will be added shortly.
+ {prompt}
  ```

  ## Provided files and GPTQ parameters
@@ -159,7 +159,7 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
  """

  prompt = "Tell me about AI"
- prompt_template=f'''Info on prompt template will be added shortly.
+ prompt_template=f'''{prompt}
  '''

  print("\n\n*** Generate:")
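In practice, "Prompt template: None" means this base CodeLlama model expects the raw prompt with no chat-style wrapping, so `prompt_template` reduces to `{prompt}`. Below is a minimal sketch of how the surrounding README example uses the template with AutoGPTQ; the repo name `TheBloke/CodeLlama-34B-GPTQ` and the generation parameters are assumptions, since the diff only shows a fragment of that example.

```python
# Sketch only: fills in the parts of the README example this diff elides.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Assumed repo name; the hunk itself does not show it.
model_name_or_path = "TheBloke/CodeLlama-34B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
                                           use_safetensors=True,
                                           device="cuda:0")

prompt = "Tell me about AI"
# "Prompt template: None": the prompt is passed through unwrapped.
prompt_template = f'''{prompt}
'''

print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```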