Text Generation
Transformers
PyTorch
Safetensors
English
llama
text generation
instruct
text-generation-inference

Add some snippets to the README

#7
Files changed (1)
  1. README.md +33 -0
README.md CHANGED
@@ -32,6 +32,26 @@ and conversations with synthetically generated instructions attached.
 
  This model is freely available for both commercial and non-commercial use, as per the Llama-2 license.
 
+ ## Model Initialisation
+ 
+ One way to get started with the model is via Hugging Face's [transformers](https://huggingface.co/docs/transformers/index) library:
+ 
+ ```python
+ import torch
+ from transformers import AutoTokenizer, pipeline
+ 
+ # Model to load from the Hugging Face Hub
+ model_name = "PygmalionAI/pygmalion-2-7b"
+ 
+ # Initialise the tokenizer and a text-generation pipeline
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ pipe = pipeline(
+     "text-generation",
+     model=model_name,
+     torch_dtype=torch.float16,
+     device="cuda",  # a CUDA-capable NVIDIA GPU is recommended for running this model
+ )
+ ```
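
A note on hardware, separate from the snippet above: if the model does not fit on a single GPU in float16, `transformers` can shard the weights across the available devices instead of pinning everything to `cuda`. A minimal sketch, assuming the `accelerate` package is installed (`device_map="auto"` is a generic `transformers` option, not something specific to this model):

```python
import torch
from transformers import pipeline

# Alternative loading path: let accelerate spread the weights across
# the available GPUs (and CPU, if needed) instead of a single device.
pipe = pipeline(
    "text-generation",
    model="PygmalionAI/pygmalion-2-7b",
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package; replaces device="cuda"
)
```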
 
  ## Prompting
 
@@ -52,6 +72,19 @@ The system prompt has been designed to allow the model to "enter" various modes
  You shall reply to the user while staying in character, and generate long responses.
  ```
 
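Before feeding text to the pipeline, it helps to see what a fully formatted prompt looks like. A sketch, assuming the `<|system|>`, `<|user|>` and `<|model|>` role tokens described in the section above (the persona and message here are made up for illustration; check the README's prompting section for the exact format):

```python
# Illustrative prompt assembly using the role tokens; the system prompt
# and user message below are placeholders, not a canonical example.
system_prompt = (
    "<|system|>Enter RP mode. You shall reply to the user while staying "
    "in character, and generate long responses."
)
user_message = "<|user|>Hi, can you tell me how cool Pygmalion models are?"
prompt = f"{system_prompt}{user_message}<|model|>"
```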
+ Using the pipeline snippet above:
+ 
+ ```python
+ conversation_with_response = pipe(
+     "Hi, can you tell me how cool Pygmalion models are?",  # use the role tokens described above when prompting
+     do_sample=True,
+     top_k=10,
+     num_return_sequences=1,
+     eos_token_id=tokenizer.eos_token_id,
+     max_new_tokens=128,
+ )
+ ```
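
For `text-generation` pipelines, the call above returns a list with one dict per requested sequence, each carrying a `generated_text` key, so the reply can be read back like this:

```python
# num_return_sequences=1, so the list holds a single result.
print(conversation_with_response[0]["generated_text"])
```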
+ 
  ## Dataset
  The dataset used to fine-tune this model includes our own [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA), along with several other instruction
  datasets, and datasets acquired from various RP forums.