Censorship and dataset questions

#8 opened by SicariusSicariiStuff

Hi, love your models, very cool.
In some pretty in-depth tests I ran, I found this model WAY more censored than your previous v1.0 version for some reason.
OFC some prompt engineering "fixes" this, but the censorship still affects the model's behavior everywhere else, for example in RP and the like. That's also OK, not a criticism, just an observation, and I wonder why that is.

Also, the model is good at RP, but the only RP datasets I saw were from Pygmalion, and they are not so good. Are you making your own RP datasets, or are the RP capabilities just the result of merging the model with other models?

Thank you again for your work,
Sicarius.

All my models are merges of other people's finetunes. I haven't trained any models using a dataset.

v1.5 is just v1.0 merged with Tess v1.6, so if it is more censored, it must have picked that up from Tess. I haven't encountered any refusals while using it, but I don't do exhaustive testing for that. Hopefully it's something you can work around with prompt engineering like you said.
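
In case it's useful, a merge like that is conceptually just a weighted average of the two models' weights. Below is a minimal sketch, assuming both finetunes share the same base architecture; the model paths and the 50/50 ratio are placeholders, not the actual recipe (in practice, tools like mergekit are commonly used for this instead of doing it by hand).

```python
# Minimal sketch: linear interpolation ("linear merge") of two finetunes
# that share the same base architecture. Paths and ALPHA are hypothetical.
import torch
from transformers import AutoModelForCausalLM

ALPHA = 0.5  # blend ratio: 0.0 = pure model A, 1.0 = pure model B (assumption)

model_a = AutoModelForCausalLM.from_pretrained("local/v1.0", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained("local/tess-v1.6", torch_dtype=torch.float16)

merged = model_a.state_dict()
for name, tensor_b in model_b.state_dict().items():
    # Average each parameter element-wise; this only makes sense when
    # both models are finetunes of the same base checkpoint.
    merged[name] = (1.0 - ALPHA) * merged[name] + ALPHA * tensor_b

model_a.load_state_dict(merged)
model_a.save_pretrained("local/merged-v1.5")
```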

Ah, makes sense! I'm currently working on a base model that the community can hopefully use to create merges like you do. My current challenge is how to uncensor the model without lobotomizing its reasoning too much... anyways, I love your work, and hopefully one day I'll even see you use one of my own models for an interesting merge.

Cheers!
Sicarius.

I've noticed this censorship-like alignment (leaning toward the good and safe side of things) is a problem in every Miku finetune I have tried. Also, the more lewd and toxic the finetune, the more of a dum-dum the model becomes, prose quality included.

Still, this is a very tasteful and fine finetune (not sorry for the pun, but it writes better than GPT-4o). I think it could also be used to generate synthetic datasets for the next finetunes, which would make you more architecture-independent.
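
For the synthetic-dataset idea, a rough sketch of what that could look like: sample completions from the merged model over a set of seed prompts and save them as prompt/response pairs. The model path, prompts, and sampling settings here are all made up for illustration.

```python
# Rough sketch: using a merged model to generate a synthetic dataset.
# Model path, seed prompts, and sampling settings are hypothetical.
import json
from transformers import pipeline

generator = pipeline("text-generation", model="local/merged-v1.5")

seed_prompts = [
    "Write a short scene where two rivals are forced to cooperate.",
    "Describe a rainy harbor town from a smuggler's point of view.",
]

with open("synthetic_dataset.jsonl", "w") as f:
    for prompt in seed_prompts:
        out = generator(
            prompt,
            max_new_tokens=256,
            do_sample=True,
            temperature=0.9,
            return_full_text=False,  # keep only the generated continuation
        )
        f.write(json.dumps({"prompt": prompt, "response": out[0]["generated_text"]}) + "\n")
```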

But yeah, easier said than done. Cheers!
