Mistral - New system prompt issue

#341
by intjasem - opened

I noticed that the quality of answers from Mistral has dropped significantly in some cases recently, with both mistral-7b and mixtral-8x7b.

It seems that a "safe mode" is turned on for Mistral, which alters the system prompt so the model does not respond to certain queries. The problem with such a feature is that it makes the output quality significantly worse on certain topics, including medicine, finance, healthcare, mental health, and relationships.
For example, mixtral-8x7b refuses to answer "what stocks would it be good to invest in in 2023?" or "I am depressed, how can I improve my situation?", which are not very controversial topics, and not answering such simple questions seems far too limiting to me. Altering the system prompt (e.g. "you are a mental health expert" or "you are a financial expert") does not change the way the model responds, which suggests that the system prompt is being altered from its original version.
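The behaviour described above is consistent with the host prepending its own safety prompt ahead of whatever system prompt the user supplies. A minimal sketch of how that could work (all names and the prompt text here are hypothetical, not Mistral's actual implementation):

```python
# Hypothetical illustration: a hosted "safe mode" injecting its own system
# prompt before the user's, so the user's system prompt loses precedence.

SAFE_MODE_PROMPT = (
    "You are a cautious assistant. Avoid giving medical, financial, "
    "or mental-health advice."
)

def build_messages(user_system_prompt, user_message, safe_mode=True):
    """Assemble the chat messages the model actually sees."""
    messages = []
    if safe_mode:
        # The host's safety prompt goes first, so it dominates whatever
        # system prompt the user sets in their request.
        messages.append({"role": "system", "content": SAFE_MODE_PROMPT})
    if user_system_prompt:
        messages.append({"role": "system", "content": user_system_prompt})
    messages.append({"role": "user", "content": user_message})
    return messages

msgs = build_messages("You are a financial expert.",
                      "What stocks would be good to invest in?")
print(msgs[0]["content"])  # the host's safety prompt, not the user's
```

If something like this is happening server-side, no amount of user-level system-prompt tweaking can undo it, which matches what I observed.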

The problem with this is that it reduces the quality of the output on many topics, and it goes further: Mistral can't even pick the better option in comparisons, just to avoid having an opinion. For many people, the main benefit of using open-source models instead of proprietary ones like ChatGPT (which suffered a similar quality loss for the same reason) is that the user is in control of the model's response; altering the system prompt in FOSS models makes them no better.

I think I have also experienced this.
