---
license: cc-by-sa-4.0
datasets:
  - chargoddard/rpguild
language:
  - en
---

# RPGPT

GPT-2 model trained on a role-playing dataset.

## Custom Tokens

The model contains 4 custom tokens to differentiate between Character, Context, and Input data.
The expected input to the model is therefore:

 "<|CHAR|>  Character Info <|CONTEXT|> Dialog or generation context <|INPUT|> User input"

The model is trained to mark what it considers the response with a Response token,
meaning the model output will be:

 "<|CHAR|>  Character Info <|CONTEXT|> Dialog or generation context <|INPUT|> User input <|RESPONSE|> Model Response"

The actual response can be extracted with a split on the `<|RESPONSE|>` token:

```python
model_out = "<|CHAR|> Character Info <|CONTEXT|> Dialog or generation context <|INPUT|> User input <|RESPONSE|> Model Response".split('<|RESPONSE|>')[-1]
```
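Put together, a small helper pair like the following covers both sides of the format. This is only an illustrative sketch; the names `build_prompt` and `extract_response` are hypothetical and not part of the model or any library.

```python
def build_prompt(char_info: str, context: str, user_input: str) -> str:
    # Assemble the expected input format from the custom tokens.
    return f"<|CHAR|> {char_info} <|CONTEXT|> {context} <|INPUT|> {user_input}"

def extract_response(model_output: str) -> str:
    # Everything after the last <|RESPONSE|> token is the model's reply.
    return model_output.split('<|RESPONSE|>')[-1].strip()
```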

## Usage

For easier use, consider downloading the scripts from my repo https://github.com/jinymusim/DialogSystem
and then use the included classes as follows.

```python
from utils.dialog_model import DialogModel
from transformers import AutoTokenizer

model = DialogModel('jinymusim/RPGPT', resize_now=False)
tok = AutoTokenizer.from_pretrained('jinymusim/RPGPT')
tok.model_max_length = 1024

char_name = "James Smith"
bio = "Age: 30, Gender: Male, Hobbies: Training language models"
model.set_character(char_name, bio)

print(model.generate_self(tok))                      # random generation
print(model.generate(tok, input("USER> ").strip()))  # conversation from user input
```
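For an interactive session, a plain loop over the same `generate` call is enough. The sketch below only reuses the calls shown above; it is not a class shipped with the DialogSystem repo.

```python
# Minimal interactive loop built on the DialogModel API shown above.
while True:
    user_input = input("USER> ").strip()
    if not user_input:
        break  # empty input ends the session
    print(model.generate(tok, user_input))
```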

Otherwise, use the standard Hugging Face interface:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('jinymusim/RPGPT')
tok = AutoTokenizer.from_pretrained('jinymusim/RPGPT')
tok.model_max_length = 1024

char_name = "James Smith"
bio = "Age: 30, Gender: Male, Hobbies: Training language models"
context = []
input_ids = tok.encode(
    f"<|CHAR|> {char_name}, Bio: {bio} <|CONTEXT|> {' '.join(context)} <|INPUT|> {input('USER> ')}",
    return_tensors='pt'
)

response_out = model.generate(input_ids,
                              max_new_tokens=150,
                              do_sample=True,
                              top_k=50,
                              early_stopping=True,
                              eos_token_id=tok.eos_token_id,
                              pad_token_id=tok.pad_token_id)

# The generated sequence includes the prompt; decode it to text.
print(tok.decode(response_out[0]))
```
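For a multi-turn conversation with the plain Hugging Face interface, one option is to maintain the `context` list yourself and append each exchange to it. The loop below is a sketch of that idea, reusing the objects from the snippet above; it is not part of the model itself.

```python
# Hypothetical multi-turn loop; assumes `model`, `tok`, `char_name`, `bio`,
# and `context` from the snippet above.
while True:
    user_input = input("USER> ").strip()
    if not user_input:
        break
    prompt = f"<|CHAR|> {char_name}, Bio: {bio} <|CONTEXT|> {' '.join(context)} <|INPUT|> {user_input}"
    input_ids = tok.encode(prompt, return_tensors='pt')
    output_ids = model.generate(input_ids,
                                max_new_tokens=150,
                                do_sample=True,
                                top_k=50,
                                eos_token_id=tok.eos_token_id,
                                pad_token_id=tok.pad_token_id)
    # Keep only the text after the <|RESPONSE|> token as the reply.
    reply = tok.decode(output_ids[0]).split('<|RESPONSE|>')[-1].strip()
    print(reply)
    # Add the exchange to the running context for the next turn.
    context.append(user_input)
    context.append(reply)
```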