Tags: Text Generation · Transformers · Safetensors · mistral · openchat · C-RLFT · conversational · text-generation-inference · 4-bit precision · awq
Commit b216fcf
shhossain committed
1 Parent(s): ee8f0cf

Removed unnecessary f in vllm prompt template

Files changed (1):
1. README.md (+1, -1)
README.md CHANGED
@@ -153,7 +153,7 @@ prompts = [
     "What is 291 - 150?",
     "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
 ]
-prompt_template=f'''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
+prompt_template='''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
 '''
 
 prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
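
For context, the `f` prefix was a genuine bug in the README example: an f-string interpolates `{prompt}` at definition time (raising a `NameError` if no `prompt` variable exists, or silently baking in a stale value), whereas the template is meant to keep `{prompt}` as a literal placeholder for the later `str.format()` call. The sketch below shows the corrected template in a minimal vLLM flow; the model id, sampling parameters, and surrounding setup are illustrative assumptions and are not taken from this commit.

```python
from vllm import LLM, SamplingParams

prompts = [
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]

# No `f` prefix: {prompt} stays a literal placeholder until .format() fills it in.
prompt_template = '''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

# Illustrative settings; the model id below is a placeholder, not this repo's id.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="openchat-3.5-awq", quantization="awq")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```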