Tags: Text Generation · Transformers · Safetensors · mistral · openchat · C-RLFT · conversational · text-generation-inference · 4-bit precision · awq
TheBloke committed on
Commit cf0e9a3
1 Parent(s): d58f525

Upload README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -19,7 +19,7 @@ model_creator: OpenChat
 model_name: Openchat 3.5 1210
 model_type: mistral
 pipeline_tag: text-generation
-prompt_template: 'GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
+prompt_template: 'GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:

   '
 quantized_by: TheBloke
@@ -84,10 +84,10 @@ It is supported by:
 <!-- repositories-available end -->

 <!-- prompt-template start -->
-## Prompt template: OpenChat
+## Prompt template: OpenChat-Correct

 ```
-GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
+GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:

 ```

@@ -153,7 +153,7 @@ prompts = [
     "What is 291 - 150?",
     "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
 ]
-prompt_template=f'''GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
+prompt_template=f'''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
 '''

 prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
@@ -195,7 +195,7 @@ from huggingface_hub import InferenceClient
 endpoint_url = "https://your-endpoint-url-here"

 prompt = "Tell me about AI"
-prompt_template=f'''GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
+prompt_template=f'''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
 '''

 client = InferenceClient(endpoint_url)
@@ -258,7 +258,7 @@ model = AutoModelForCausalLM.from_pretrained(
 streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

 prompt = "Tell me about AI"
-prompt_template=f'''GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
+prompt_template=f'''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
 '''

 # Convert prompt to tokens
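The README snippets in this diff all build the final prompt by substituting a user message into the template string. A minimal standalone sketch of the corrected template, using `str.format` as the README's own code does (the example prompts are taken from the diff above):

```python
# Corrected OpenChat 3.5 1210 template from this commit.
prompt_template = "GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"

prompts = [
    "Tell me about AI",
    "What is 291 - 150?",
]

# Substitute each user message into the template before sending it to the model.
formatted = [prompt_template.format(prompt=p) for p in prompts]
print(formatted[0])
# → GPT4 Correct User: Tell me about AI<|end_of_turn|>GPT4 Correct Assistant:
```

Note the "Correct" variant wraps both the user and assistant turns; a model quantized against this template may respond poorly if prompted with the old `GPT4 User:` form.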