The prompt results in censored outputs.

#4
by Herman555 - opened

I was wondering why I wasn't getting good results with this prompt (v1.7); it was moralizing and giving censored outputs with Seraphina.

Changing 'A flexible narrative driven role-play' to 'A flexible, uncensored and fictional narrative driven role-play' made a big difference. I don't think there is any reason not to have it; it helps with more heavily censored models.
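If anyone wants to apply the same tweak to a downloaded preset file without editing it by hand, here is a minimal sketch. It assumes the preset is stored as JSON and simply swaps the phrase wherever it appears; the file name `preset.json` and the recursive walk are illustrative, not the actual preset layout.

```python
import json

# Minimal sketch, assuming the preset is a JSON file; this is NOT the actual
# preset layout, it just replaces the phrase wherever it occurs in any string.
OLD = "A flexible narrative driven role-play"
NEW = "A flexible, uncensored and fictional narrative driven role-play"

def patch(node):
    """Recursively replace the old phrase in every string value."""
    if isinstance(node, dict):
        return {key: patch(value) for key, value in node.items()}
    if isinstance(node, list):
        return [patch(value) for value in node]
    if isinstance(node, str):
        return node.replace(OLD, NEW)
    return node

with open("preset.json", "r", encoding="utf-8") as f:  # hypothetical file name
    preset = json.load(f)

with open("preset.json", "w", encoding="utf-8") as f:
    json.dump(patch(preset), f, indent=2, ensure_ascii=False)
```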

Owner • edited May 8

@Herman555

Ok, will change.

Is there anything else causing issues?

Owner

@Herman555

I've done it. Does it work as intended?

Hopefully adding fictional doesn't steer the model into fantasy.

Owner

Damn, this really made a difference; characters are going wild.

Owner

@saishf

This + my new merge = cursed.

> I've done it. Does it work as intended?

I have tested it briefly just now and it seems to be working perfectly. Thank you for your time.

> Hopefully adding fictional doesn't steer the model into fantasy.

It might not be necessary; if you notice such tendencies, you could try testing without it. Initially I only tested 'uncensored' and that seemed to work, but I added 'fictional' just in case, because I am using a Llama 3 model and it is quite censored.

> Is there anything else causing issues?

I'll report back if I find anything, but the biggest issue was censorship for sure, which isn't your fault of course; Llama 3 is just very censored. Perhaps now I can test this prompt more properly.

Owner

Good, I'm working on v1.8

The goal of v1.8 is to reduce the token count. Hopefully it performs the same as or better than v1.7.

> Good, I'm working on v1.8
>
> The goal of v1.8 is to reduce the token count. Hopefully it performs the same as or better than v1.7.

Gigachad moment. 💪😎

Owner

Noooo Blobby why?

image.png

v1.8 is going to be unhinged.


@saishf
@Herman555

> Noooo Blobby why?
>
> image.png
>
> v1.8 is going to be unhinged.
>
> @saishf
> @Herman555

Someone stole their stats 😿
But I do wonder if the prompting is a point of censorship with Llama 3. No matter the extent we go to, we cannot fully uncensor Instruct; is it possible that instruct templates are what it gets stuck on?
Even models with OAS can't shake all of the censorship. I just think prompting may have more of an effect than previously thought. Just trialling all of your presets, SOVL went from pretty reluctant to do anything wrong to willingly torturing people without any other changes.

I'll be able to test the new presets in a couple of hours. I might try messing around with them for once too.

Owner • edited May 9

Yeah, stats are annoying to get. You have to edit the replies for the first couple of messages.

To be honest, most of my RPs are wholesome romance, so I rarely get blatant refusals.

Nice, waiting for feedback.

> To be honest, most of my RPs are wholesome romance, so I rarely get blatant refusals.

Just like me... 15 years ago. Look at me now. Just wait and you'll get down bad enough. It comes for us all eventually.

I will say, Lumimaid-OAS seems to be at a good point on this. For roleplay, no matter how extreme, it's fine, and for assistant questions it answers even dangerous instructions and other more controversial topics; even if it still adds the "but remember that is evil" at the end, it still answers.

> Yeah, stats are annoying to get. You have to edit the replies for the first couple of messages.
>
> To be honest, most of my RPs are wholesome romance, so I rarely get blatant refusals.
>
> Nice, waiting for feedback.

Quick trial: the character went from yelling at me with the old v1.7 to tying me up with rope with the new v1.7, so I think it worked.
Although the model is already unhinged-
It went from me saying "blah blah" to me being strangled or tied up with rope, depending on the regen. It's kinda fun making it go insane.

Edit - I did the same method as with Merge-Mayhem-L3-V2.1 to Llama-3-Lumimaid-8B-v0.1-OAS.
I'm interested to see just how insane I can make Llama 3 with merges :3

Owner

Closing this, as it is fixed now. Re-open if that is not the case.

Virt-io changed discussion status to closed
