Commit c508417 (1 parent: fe11df7) by TheBloke: Upload README.md

Files changed (1): README.md (+67 -4)

README.md CHANGED
@@ -1,11 +1,20 @@
 ---
+datasets:
+- PygmalionAI/PIPPA
 inference: false
+language:
+- en
 license: llama2
 model_creator: PygmalionAI
 model_link: https://huggingface.co/PygmalionAI/pygmalion-2-13b
 model_name: Pygmalion 2 13B
 model_type: llama
+pipeline_tag: text-generation
 quantized_by: TheBloke
+tags:
+- text generation
+- instruct
+thumbnail: null
 ---
 
 <!-- header start -->
@@ -53,6 +62,15 @@ The model has been trained on prompts using three different roles, which are den
 The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
 The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
 
+The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example:
+
+```
+<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
+{{persona}}
+
+You shall reply to the user while staying in character, and generate long responses.
+```
+
 
 <!-- prompt-template end -->
 
@@ -165,10 +183,10 @@ model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
 tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
 
 prompt = "Tell me about AI"
-prompt_template=f'''The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
-
-The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
-The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
+prompt_template=f'''<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
+{{persona}}
+
+You shall reply to the user while staying in character, and generate long responses.
 
 '''
 
@@ -239,4 +257,49 @@ And thank you again to a16z for their generous grant.
 
 # Original model card: PygmalionAI's Pygmalion 2 13B
 
-No original model card was available.
+<h1 style="text-align: center">Pygmalion-2 13B</h1>
+<h2 style="text-align: center">An instruction-tuned Llama-2 biased towards fiction writing and conversation.</h2>
+
+## Model Details
+
+The long-awaited release of our new models based on Llama-2 is finally here. Pygmalion-2 13B (formerly known as Metharme) is based on
+[Llama-2 13B](https://huggingface.co/meta-llama/llama-2-13b-hf) released by Meta AI.
+
+The Metharme models were an experiment to try and get a model that is usable for conversation, roleplaying and storywriting,
+but which can be guided using natural language like other instruct models. After much deliberation, we reached the conclusion
+that the Metharme prompting format is superior to (and easier to use than) the classic Pygmalion format.
+
+This model was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories
+and conversations with synthetically generated instructions attached.
+
+This model is freely available for both commercial and non-commercial use, as per the Llama-2 license.
+
+
+## Prompting
+
+The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
+
+The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
+The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to
+form a conversation history.
+
+### Prompting example
+
+The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example:
+
+```
+<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
+{{persona}}
+
+You shall reply to the user while staying in character, and generate long responses.
+```
+
+## Dataset
+The dataset used to fine-tune this model includes our own [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA), along with several other instruction
+datasets, and datasets acquired from various RP forums.
+
+## Limitations and biases
+
+The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
+
+As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
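The role tokens described in the prompting section can be assembled programmatically when chaining a conversation history. A minimal Python sketch, assuming illustrative helper and variable names that are not part of the model card itself:

```python
# Sketch: build a Metharme-style prompt string from a conversation history.
# The role tokens <|system|>, <|user|> and <|model|> come from the model card;
# build_prompt and the example persona are illustrative only.

def build_prompt(system: str, turns: list[tuple[str, str]], user_msg: str) -> str:
    """Chain role tokens into a single prompt string.

    `turns` is a list of (user_text, model_text) pairs forming the history.
    The result ends with <|model|> so the model generates the next reply.
    """
    parts = [f"<|system|>{system}"]
    for user_text, model_text in turns:
        parts.append(f"<|user|>{user_text}")
        parts.append(f"<|model|>{model_text}")
    parts.append(f"<|user|>{user_msg}")
    parts.append("<|model|>")
    return "".join(parts)

persona = "Aster is a cheerful travel guide who loves history."
system = (
    "Enter RP mode. Pretend to be Aster whose persona follows:\n"
    f"{persona}\n\n"
    "You shall reply to the user while staying in character, "
    "and generate long responses."
)

prompt = build_prompt(
    system,
    turns=[("Hi!", "Hello, traveller!")],
    user_msg="Tell me about Rome.",
)
print(prompt)
```

The resulting string can be passed as the input to the tokenizer in the loading example earlier in the card; each subsequent model reply is appended to `turns` before the next call.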