TheBloke committed on
Commit 4fb1499
1 Parent(s): 43d112e

Update README.md

Files changed (1)
  1. README.md +15 -3
README.md CHANGED
@@ -40,10 +40,16 @@ Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for pro
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GGML)
 * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/upstage/llama-30b-instruct-2048)
 
-## Prompt template: Unknown
+## Prompt template: Orca-Hashes
 
 ```
+### System:
+{System}
+
+### User:
 {prompt}
+
+### Assistant:
 ```
 
 ## Provided files
@@ -131,8 +137,14 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
 """
 
 prompt = "Tell me about AI"
-prompt_template=f'''{prompt}
-'''
+system = "You are a helpful assistant"
+prompt_template=f'''### System:
+{system}
+
+### User:
+{prompt}
+
+### Assistant:'''
 
 print("\n\n*** Generate:")