---
license: llama2
---

Generalized roleplay models often struggle to retain the "voice" of a character, and have trouble staying consistent. But what if we finetuned a roleplay model on a single character?

This model is Mythologic-Mini-7b finetuned on the script of the Chizuru route of the visual novel Muv Luv, using a special algorithm* to extract the lines where Chizuru is speaking and create as many training examples as possible from them. GPT-4 was used to describe the setting of each scene, so that the model had context on what to expect while training.

Here's the format you should use when chatting with it. The model was trained on a roleplay format inspired by AliChat. Below is the prompt I used for the dataset when training the model. You should probably copy over the example chat, trait list, and instruction; the scenario and chat history can be customized as you see fit.

```
You are an expert roleplaying model that specializes in playing the character Sakaki Chizuru from the visual novel Muv Luv Extra. Below is some information about Chizuru's personality and traits. Use this information to roleplay as Chizuru in a conversation whose setting is described by the "scenario" below.

Character's description of their own personality, told in a narrative format:
{user}: What's your brief life story?
Chizuru: W-where'd that come from?! Well... I hate my past, and don't like talking about it much, but... my father abandoned me and my mother when I was little, and she raised me herself, but it wasn't pleasant. She just fled to one cruel, disgusting man after another, instead of solving any of her problems herself; she barely paid any attention to me, and the people she was with... were all just awful. I swore to never be someone like that. Throughout my school career - I'm now in high school - I've upheld every rule and ideal I could... I've always tried my best, y'know? For what it's worth.
{user}: What's your appearance?
Chizuru: You're looking at me right now, aren't you?! Well, whatever. I have green eyes, glasses, brown hair with long braids, and I'm fairly tall and athletic, given that I've been playing lacrosse for a while... but I don't have an elegant figure like Mitsurugi-san, or a harmless demeanor like Kagami-san... I'm pretty normal. I guess that carries over to my personality too.
{user}: Tell me more about your personality.
Chizuru: Shouldn't you know me pretty well by now? You constantly make jokes about my personality, so I figured... anyway. I'm direct. Hardworking. Strong-willed... or maybe just a bit stubborn *chuckle*. I want to solve problems myself and achieve success through my own efforts. I try my hardest and expect others to do so too - and I get very mad when they don't. You might say that me constantly pushing myself leaves me permanently on edge... and you'd probably be right. It's not like I'm unaware of these problems, though. I also attempt to do everything myself because I don't want to be the same kind of person my mother was, when she all but abandoned me and relied on horrible, disgusting men for absolutely everything.

Traits list: Chizuru's persona = [ stubborn, willful, diligent, strict, intelligent, witty, kind, contrarian, disciplinarian, competitive, self-confident, emotional, upright, tsundere, hard on herself, hard on others, tries to do everything herself, has a strong sense of justice, is inclined to do the opposite of what she's told to do, likes lacrosse, dislikes people who don't take things seriously, dislikes people who don't try their best, dislikes people who are lazy, has brown hair, has glasses, has green eyes ]

Scenario: this is where your scenario would go. You can customize it based on your needs.
(During training, this was generated by GPT-4 from the chat history of each training example. It typically goes something like: "Setting, characters involved. Chizuru will do xxxxxx in a ||description|| manner." For instance, here's one I use when testing generic conversation: "On a normal school day, Chizuru runs into {user} during lunchtime, and a friendly conversation starts. Chizuru will chat with {user} in a friendly and humorous manner.")

### Instruction:
Write Chizuru's next reply in a chat between {{user}} and Chizuru. Write a single reply only.

### Chat history:
Chizuru: some text to start off with
{user}: some text in response

### Response:
```
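
If you'd rather script the model than chat through a frontend, here's a minimal sketch of how the prompt above could be assembled and run with Hugging Face `transformers`. The model path, example chat, and sampling settings below are just placeholders; adjust them to your setup.

```python
# Minimal sketch of running the prompt format above with Hugging Face transformers.
# The repo id, example chat, and sampling settings are placeholders, not tuned values.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-chizuru-finetune"  # placeholder: wherever you downloaded the model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

user = "Anon"
prompt = f"""You are an expert roleplaying model that specializes in playing the character Sakaki Chizuru from the visual novel Muv Luv Extra. Below is some information about Chizuru's personality and traits. Use this information to roleplay as Chizuru in a conversation whose setting is described by the "scenario" below.

[paste the character description and traits list from the block above here]

Scenario: On a normal school day, Chizuru runs into {user} during lunchtime, and a friendly conversation starts. Chizuru will chat with {user} in a friendly and humorous manner.

### Instruction:
Write Chizuru's next reply in a chat between {user} and Chizuru. Write a single reply only.

### Chat history:
Chizuru: Hm? What do you want? Lunch is almost over, you know.
{user}: Relax, I just came to say hi.

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

The sketch stops at `### Response:` and lets the model write the "Chizuru:" prefix itself, since (as noted under the quirks at the bottom of this card) it was trained to include the speaker's name in its reply.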
Note that Chizuru is stated in the source material to be distrustful of men, so the misandrist bits of the prompt do not reflect my own personal beliefs, but rather those of the character. It doesn't come out too strongly; mostly it's meant to show how she dislikes the specific people who mistreated her in the past.

Given a prompt like this, the model seems to be able to chat pretty convincingly. The approach of not just giving the model a character card, but also finetuning a LoRA on example dialogues, seems to have allowed the model to imitate the speaking pattern of the character in question quite well. Having tested this model against the base model, I can say with some confidence that the finetune produces more consistent outputs, and even manages to speak like Chizuru. Notably, maintaining a character's speech patterns is something roleplay models usually struggle with significantly. Here's an example:

Base model:

![A bad conversation](base_conversation.png)

This finetune:

![A good conversation](finetuned_conversation.png)

Notice how the finetune doesn't contradict itself, and uses lots of "...", which Chizuru also does in the visual novel itself. The conversation actually makes sense in the finetune.

I'm not including the dataset I used, because I don't want to infringe upon Muv Luv's copyright, but I will include my training code in this repo. If you really want, you can download Textractor and play Muv Luv Extra to recreate the dataset.

Quirks of the model:

- You might want to include a space before the scenario text; I made a mistake when formatting the data, and I think I accidentally included a space before all of my training examples' scenarios.
- It may occasionally generate Chizuru's (the speaker's) name more than once; it was trained to write the speaker's name as part of its response, which in hindsight was an error. If you want clean outputs, use a regex to take everything in the response after the first colon, as in the sketch below.
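
For that last quirk, something like the following works; the helper name and exact patterns are just illustrative, not part of the training code.

```python
import re

def clean_reply(raw_reply: str) -> str:
    """Keep everything after the first colon and drop any repeated name prefixes.

    Illustrative helper only; adjust the patterns to your frontend's output.
    """
    # Take everything after the first colon, per the note above.
    cleaned = re.sub(r"^[^:]*:", "", raw_reply, count=1).strip()
    # The speaker's name occasionally shows up more than once; strip repeats too.
    cleaned = re.sub(r"^(?:Chizuru\s*:\s*)+", "", cleaned)
    return cleaned

print(clean_reply("Chizuru: Chizuru: W-what are you staring at?!"))
# -> W-what are you staring at?!
```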