Great. But you need to filter out URL sources and other gratuitous info.

#35
by Phil337 - opened

It not only goes on and on sometimes, but often repeats in endless loops.

For example, after finishing a response it will often start providing a source, which is always wrong by the way, so any URL-generation directives should be removed from the training data. Then it might add a warning about respecting anyone remotely related to the topic, such as minorities, women, celebrities..., then add a few emojis, then ask if there is anything else, then repeat endlessly.

You need to filter out far more of these gratuitous additions, like moralizing, URL references, asking if there's anything else you want, adding emojis... otherwise the end token is never given priority. The alignment tax on this one is making it unusable.
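
For what it's worth, here's a rough sketch of the kind of training-data filtering I have in mind (the function name and regex patterns are purely illustrative, not anything from the actual OpenChat pipeline): strip URLs, emojis, and the boilerplate tails from assistant turns before training, so the response ends where the answer ends and the end token gets learned.

```python
import re

# Hypothetical cleanup for assistant turns in the training data
# (patterns are illustrative only): drop fake source URLs, emoji
# sign-offs, and "anything else?" boilerplate.

URL_PATTERN = re.compile(r"https?://\S+")
EMOJI_PATTERN = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")
TAIL_PATTERNS = [
    re.compile(r"(?is)\bis there anything else\b.*$"),
    re.compile(r"(?is)\bsource:\s*$"),
    re.compile(r"(?is)\bplease (remember|be sure) to respect\b.*$"),
]

def clean_assistant_turn(text: str) -> str:
    """Remove URLs, emojis, and gratuitous trailing boilerplate."""
    text = URL_PATTERN.sub("", text)
    text = EMOJI_PATTERN.sub("", text)
    for pattern in TAIL_PATTERNS:
        text = pattern.sub("", text)
    return text.strip()

print(clean_assistant_turn(
    "Paris is the capital of France. Source: https://example.com 😊 "
    "Is there anything else I can help you with?"
))
# -> "Paris is the capital of France."
```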

In short, the most important things are respecting the user's wishes, brevity and courtesy, and only on rare occasions (such as when asked how to make meth or steal a car) saying no. If something isn't remotely illegal or immoral (e.g. celebrity gossip) then stay out of it. No adult in real life lectures another adult like this. This isn't human alignment. This is brainless and unnatural AI moralizing. I don't care at all about celebrities. They're just in my test questions because they probe for exactly this kind of behavior. And this LLM failed miserably. Don't use AI alignment to needlessly lecture users.

It was better a week ago. I don't know what they changed, but it never used to go on rambling tirades before.

Thanks for clarifying, @JJJJJPSYCHIC. I only just tried using it recently. I hope they're able to fix the issue. My guess is they just added one too many alignment add-ons until they buried the end token (e.g. please respect celebrities, here's a link, emojis, anything else I can help you with...).
