Tags: Text Generation · Transformers · Safetensors · mistral · openchat · C-RLFT · conversational · text-generation-inference · 4-bit precision
Commit 6427822 by TheBloke (1 parent: b5504cb)

Upload README.md

Files changed (1): README.md (+5 −5)
README.md CHANGED
@@ -19,7 +19,7 @@ model_creator: OpenChat
 model_name: Openchat 3.5 1210
 model_type: mistral
 pipeline_tag: text-generation
-prompt_template: 'GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
+prompt_template: 'GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
 
   '
 quantized_by: TheBloke
@@ -71,10 +71,10 @@ These files were quantised using hardware kindly provided by [Massed Compute](ht
 <!-- repositories-available end -->
 
 <!-- prompt-template start -->
-## Prompt template: OpenChat
+## Prompt template: OpenChat-Correct
 
 ```
-GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
+GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
 
 ```
 
@@ -246,7 +246,7 @@ from huggingface_hub import InferenceClient
 endpoint_url = "https://your-endpoint-url-here"
 
 prompt = "Tell me about AI"
-prompt_template=f'''GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
+prompt_template=f'''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
 '''
 
 client = InferenceClient(endpoint_url)
@@ -303,7 +303,7 @@ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
 
 prompt = "Write a story about llamas"
 system_message = "You are a story writing assistant"
-prompt_template=f'''GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
+prompt_template=f'''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
 '''
 
 print("\n\n*** Generate:")
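The change swaps the plain OpenChat template for the "GPT4 Correct" variant throughout the model card. A minimal sketch of filling in the corrected template, assuming the `build_prompt` helper name, which is illustrative and not part of the card's own code:

```python
# Sketch: applying the corrected OpenChat prompt template from this commit.
# `build_prompt` is an illustrative helper, not a library function.
def build_prompt(prompt: str) -> str:
    # The template the diff introduces: "GPT4 Correct User/Assistant"
    # separated by the <|end_of_turn|> token.
    return f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"

formatted = build_prompt("Tell me about AI")
print(formatted)
# → GPT4 Correct User: Tell me about AI<|end_of_turn|>GPT4 Correct Assistant:
```

The resulting string is what the card's `InferenceClient` and `AutoTokenizer` snippets pass to the model as `prompt_template` with `{prompt}` substituted.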