TheBloke committed
Commit b744d51
1 Parent(s): 6666239

Initial GPTQ model commit

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -44,10 +44,10 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML)
 * [Open-Orca's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B)
 
-## Prompt template: TBC
+## Prompt template: OpenChat Llama2 V1
 
 ```
-Info on prompt template will be added shortly.
+User: {prompt}<|end_of_turn|>Assistant:
 ```
 
 ## Provided files
@@ -141,7 +141,7 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
 """
 
 prompt = "Tell me about AI"
-prompt_template=f'''Info on prompt template will be added shortly.
+prompt_template=f'''User: {prompt}<|end_of_turn|>Assistant:
 '''
 
 print("\n\n*** Generate:")
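For context on the change above, here is a minimal sketch (not part of this commit) of how the new OpenChat Llama2 V1 template slots into the README's AutoGPTQ example. The repo id `TheBloke/OpenOrcaxOpenChat-Preview2-13B-GPTQ` and the loading arguments are assumptions based on the surrounding README code visible in the second hunk; adjust them to the branch and hardware you actually use.

```python
# Sketch only: fill the OpenChat Llama2 V1 template added in this commit
# and generate with the quantised model via AutoGPTQ.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/OpenOrcaxOpenChat-Preview2-13B-GPTQ"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
                                           use_safetensors=True,
                                           device="cuda:0")

prompt = "Tell me about AI"
# Single-turn "User: ... Assistant:" format with OpenChat's
# <|end_of_turn|> separator, as added in this commit.
prompt_template = f'''User: {prompt}<|end_of_turn|>Assistant:
'''

input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```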