Text Generation · Transformers · Safetensors · English · llama · instruct · text-generation-inference · 4-bit precision · gptq
TheBloke committed
Commit daf98f2
Parent: 2c33685

Upload README.md

Files changed (1)
README.md +1 -10
README.md CHANGED
@@ -183,19 +183,10 @@ model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
 tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
 
 prompt = "Tell me about AI"
-prompt_template=f'''The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
-
-The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
-The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
-
-The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example:
-
-```
-<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
+prompt_template=f'''<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
 {{persona}}
 
 You shall reply to the user while staying in character, and generate long responses.
-```
 
 '''
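For context, the corrected `prompt_template` is meant to be combined with the `<|user|>` and `<|model|>` role tokens that the removed passage described: role tokens can be chained to build a conversation history, with `<|model|>` cueing the model to respond. Below is a minimal sketch of how that might look end to end with `transformers`. The repo id, character name, persona, and generation settings are illustrative assumptions, not part of this commit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- the README defines model_name_or_path earlier;
# loading a GPTQ checkpoint this way typically also requires
# optimum / auto-gptq to be installed.
model_name_or_path = "TheBloke/Some-Model-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")

# Fill the template: <|system|> sets the scene and reply-length instruction,
# <|user|> carries the user's message, and <|model|> asks for a response.
char = "Aria"                                       # illustrative character name
persona = "Aria is a cheerful android librarian."   # illustrative persona

prompt = (
    f"<|system|>Enter RP mode. Pretend to be {char} whose persona follows:\n"
    f"{persona}\n\n"
    "You shall reply to the user while staying in character, "
    "and generate long responses.\n"
    "<|user|>Tell me about AI<|model|>"
)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```

To continue the conversation, the decoded reply would be appended after `<|model|>` and a new `<|user|>...<|model|>` pair added, as the removed passage describes.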