Feedback

#6
by Szarka - opened

The original miqu censoring can be felt, like avoiding words and such. I'd recommend that you merge a miqu that's already finetuned like miqumaid, or something like that.

please, no undi-slop. LZLV, euryale, anything else.

@Szarka I'm impressed you found some words this model won't use. I'll give miqumaid another look. In my initial testing with it, it didn't merge as well and it struggled more with some logical tests, but that was before I tried the SLERP merge approach.

@jackboot What is undi-slop? Can you explain your comment?

@sophosympatheia I mean... it can use them, but it doesn't bring them up on its own, if you know what I mean.

I've seen the datasets Undi and crew used for these models (including the maids) and I'm not impressed. Lots of gpt-isms and low quality. The models aren't very intelligent but they are super geared towards easy NSFW so people like them. I think it's a mistake to merge them into this one. Uncensored isn't the only metric and as you see from your experience, they get dumb.

I have stacked miqu, senku, quartet against each other and yours is the least censored from the bunch. With a good system prompt, I hardly see any issues. Certainly not worth getting more shivers down your spine over.

@jackboot I like your style haha. Thanks for engaging with my models and actively sharing your thoughts.

@jackboot I see... In that case it's really not a good idea to merge it, sorry. I agree, if that's the case.
