Is this made with the larger LoRA?


This one? https://huggingface.co/ZeusLabs/gpt4-x-alpaca-rp-lora/tree/main/gpt-rp-instruct-2-all-layers

I downloaded this model and it's a little bit crazy. By "bad generations", do you mean nonsensical numbers and words? After updating the tokenizer the model is coherent, it just gives weird answers.

for example:

To cook four eggs in a pan, follow these steps:
1. Fill a medium saucepan with water and add enough salt so that it tastes like seawater when you dip your finger into it.
2. Place the pot over medium heat until it reaches about 60°C (160°F).
3. Crack one egg at a time into a bowl and gently slip it into the hot water using chopsticks or tongs.
4. Cook the egg for three minutes before removing it onto a plate.
5. Repeat this process for all remaining eggs. Make sure not to let the water cool down too much between eggs.
6. When finished, remove the eggs from the pan and enjoy your breakfast.

Like I state on the model page, it is worse than base gpt4-x-alpaca at plain instruct responses. It is tailor-made for role playing. See the model page for an example roleplaying instruction; it should perform much better at those tasks. The only training done with the LoRA was for roleplaying, so it has lost some coherency on instructs that lack a roleplaying task, while gaining roleplaying skill.


"Instructions simply using alpaca format are likely to be of lower quality. If you want pure general instruct capability I reccomend GPT-4-X-Alpaca (the base model of this) - The model responds well to giving it a roleplay task in the preprompt, and the actual conversation in the "### Input: " field.

For a better idea of prompting it for roleplay, check out the roleplay discord bot code I made here: https://github.com/teknium1/alpaca-roleplay-discordbot

Here is an example:

### Instruction:

Role play as character that is described in the following lines. You always stay in character.
{"Your name is " + name + "." if name else ""}
{"Your backstory and history are: " + background if background else ""}
{"Your personality is: " + personality if personality else ""}
{"Your current circumstances and situation are: " + circumstances if circumstances else ""}
{"Your common greetings are: " + common_greeting if common_greeting else ""}
Remember, you always stay on character. You are the character described above.
{past_dialogue_formatted}
{chat_history if chat_history else "Chatbot: Hello!"}

Always speak with new and unique messages that haven't been said in the chat history.

Respond to this message as your character would:

### Input:

{text}

### Response:

{name}:"

(from the model page)
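For reference, here is a rough sketch of how those template fragments might be assembled into a single prompt string in Python. The variable names mirror the placeholders above, but the exact wording and whitespace of the actual discord bot's prompt builder may differ, so treat this as an approximation rather than the bot's real code:

```python
# Rough sketch of assembling the roleplay prompt from the template above.
# Variable names mirror the placeholders; the real bot's formatting may differ.
def build_prompt(text, name="", background="", personality="",
                 circumstances="", common_greeting="",
                 past_dialogue_formatted="", chat_history=""):
    # Only keep the character-description lines that were actually provided,
    # mirroring the {"..." if x else ""} pattern in the template.
    char_lines = [
        f"Your name is {name}." if name else "",
        f"Your backstory and history are: {background}" if background else "",
        f"Your personality is: {personality}" if personality else "",
        f"Your current circumstances and situation are: {circumstances}" if circumstances else "",
        f"Your common greetings are: {common_greeting}" if common_greeting else "",
    ]
    instruction = "\n".join(
        ["Role play as character that is described in the following lines. You always stay in character."]
        + [line for line in char_lines if line]
        + [
            "Remember, you always stay on character. You are the character described above.",
            past_dialogue_formatted,
            chat_history if chat_history else "Chatbot: Hello!",
            "",
            "Always speak with new and unique messages that haven't been said in the chat history.",
            "",
            "Respond to this message as your character would:",
        ]
    )
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{text}\n\n"
        f"### Response:\n{name}:"
    )


# Example usage with a made-up character.
print(build_prompt("Hi, who are you?", name="Aria", personality="cheerful and curious"))
```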

I am using it with ooba to play characters exclusively. I asked it these things as the character GPT-3. But one LoRA is 400 MB and the other is 800 MB? So is the fp16 model the first one and the int4 the second? I just downloaded both and want to see how they do. It will not help roleplaying if the character doesn't know reality.

The second LoRA was trained on more layers, which is why it's larger.
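If you want to check that yourself, each adapter's adapter_config.json lists the modules the LoRA was applied to, and more target modules generally means a larger adapter file. A minimal sketch, assuming the subfolder name from the URL above (repeat with the other adapter's folder to compare):

```python
# Sketch: inspect which modules a LoRA adapter targets by reading its
# adapter_config.json from the Hub. The subfolder name is taken from the
# repo URL above; the other adapter's folder name will differ.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="ZeusLabs/gpt4-x-alpaca-rp-lora",
    filename="gpt-rp-instruct-2-all-layers/adapter_config.json",
)
with open(path) as f:
    cfg = json.load(f)
print("rank:", cfg.get("r"), "target modules:", cfg.get("target_modules"))
```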
