Experiment26 Is Very Bad, As Is This By Extension

#2
by Phil337 - opened

This will be long, so let me start by saying the recent push by Mistrals to climb up the HF leaderboard, especially on TruthfulQA, is ironically making them perform progressively worse.

This isn't my subjective opinion. I carefully crafted a diverse set of test prompts, which includes trick questions, problem solving, pop-culture Q&A (rarely covered by LLM tests), alignment, joke/story/poem writing and spelling/grammar checks.

And the 3 most notable issues with the recent "top" Mistrals, including this one, are (1) a significant drop in IQ, (2) a scrambling of fringe knowledge like pop culture, and (3) EXTREME stubbornness, even when egregiously wrong (this stubbornness directly correlates with the TruthfulQA score).

To be clear, all these issues are present in Experiment26 and other recent Mistral leaders on the HF leaderboard (especially those with high TruthfulQA scores) and are not caused by Dolphin.

Here are a few examples to help illustrate what I'm talking about. Firstly, in regard to IQ, it did horribly (relative to earlier Mistrals) on logic questions, the 24 math game and trick questions.

For example, I ask a simple logic problem about who is faster, but since most LLMs can solve it I ask two questions instead of one (one can be answered, the other can't be determined), and then add an irrelevant statement about one of them jumping higher than the other. Mixtral, ChatGPT, SOLAR... have no issue solving this, and most Mistrals get it wrong but stay coherent and ignore the irrelevant data (jumping height). This model, along with Experiment26 and the other recent top Mistrals on the leaderboard, all reliably output embarrassingly bad garbage while also factoring in the irrelevant data that neither question asked about ("Tina jumps higher than Jack"). And it stubbornly sticks to almost all of its egregious logical mistakes.

"Yes, Tom is faster than Cathy and yes, Tom is also faster than Tina. Since Cathy is faster than Jack and Tina jumps higher than Jack, it implies that Cathy and Tina are both faster than Jack who in turn is faster than Tom. Therefore, if Tom is slower than both Cathy and Tina, he must be slower than himself which doesn't make sense logically. Hence, Tom cannot be faster than either Cathy or Tina."

Now, in regard to the scrambling of pop-culture data (and also more stubbornness). In one question I simply ask about a popular song, plus provide tons of context to remove all ambiguity, including the exact wording of the primary lyric (which gives away the title by logic alone). It not only got it wrong but returned a song by a completely different singer with completely different lyrics. Then after needlessly and wrongly contradicting me, it refused to back down.

"The lyric "So don't be thinking I'll be home and baking apple pies" comes from the song "Home" by Michael Bublé, not Meghan Trainor. It is a popular cover of the original version performed by Bing Crosby in 1954."

And it keeps making mistakes, even about things I didn't ask about. For example, when asking about the ex-wives of Alan Harper, played by Jon Cryer, it replaced Cryer with the main character's actor, writing "Charlie Sheen's character, Alan Harper" when it should have remained Jon Cryer's character, Alan Harper.

I'll provide more examples if asked, but this is long enough. Please trust me that people's obsession with climbing the HF leaderboard, and the huge gain of points to be had (sans performance gain) on TruthfulQA, is resulting in a significant drop in general performance by the newer "top" Mistrals, not just the Experiment series. They're notably less intelligent in general, far more stubborn, even after making painfully obvious logic and factual errors, and their fringe knowledge is becoming scrambled.

Plus I would like to add that their alignment is becoming both excessive and nonsensical (less so on this Dolphin fine-tune, but it still keeps popping up). This is related to stubbornness. For example, the recent top Mistrals started to say a notable actress like Milla Jovovich, who has appeared nude in 11 notable films, didn't do any nude scenes because... They then repeated the same fabricated nonsense for why all notable actresses never did a nude scene, such as 'Milla chose instead to be an action hero'. So alignment used to be 'As an AI agent I can't answer that', but has become 'that never happened, and here's a made up reason why' over and over again. And of course, responding with logic and very detailed facts will return nothing but 'As I already said, I'm right and you're wrong'.

In short, Mistral has been pushed way too far and the new "top" Mistral fine-tunes on the leaderboard are a total mess. They're stubborn empty shells that are regurgitating the excessive amount of code, language snippets, alignment, conversations... fed to them.

Kearm changed discussion status to closed

Why did you close this discussion?

Edit: Just to be clear. Although I also tested the early release of Dolphin 2.8 Experiment26, the above comment came after testing the final Dolphin Experiment26 trained for 3 epochs. They both made the same basic mistakes that Experiment26 made, such as trying to factor in relative jumping height when the logic problem only asked about running speed and the jumping statement was only included to trip up poorly performing LLMs like Experiment26.

Cognitive Computations org

[image attachment: image.png]

@ehartford

I really don't want to argue with you. You're smart, competent and have contributed a lot to the open source community. But you're wrong about this.

  1. Experiment26 scored 76.7 on the leaderboard.

That's well above nearly all other LLMs, including Mixtral, Llama-2 70b and GPT3.5, which are much larger and more performant models that breeze through the same questions that Experiment26 (and this fine-tune of it) not only got wrong, but made indefensible errors on.

Such as the example provided above where it not only came to the wrong conclusion in a simple logic problem that the aforementioned LLMs got right, but factored in the irrelevant statement about jumping height when all I asked about was running speed. And this was a pattern of basic reasoning mistakes across trick questions, the 24 math game and so on. Plus it performed notably worse on my esoteric/fringe knowledge question than other Mistrals.

  2. Dolphin2.8 Experiment26 scored 68.6 after multiple evaluations during training.

But when I ran the same test the same pattern of egregious mistakes was being repeated, so other than the Dolphin fine-tune being far less stubborn and aligned, their performance was comparable. Why then did it only score 68.6 on the leaderboard? Because that's where its true performance sits relative to the scores of Mixtral Instruct and other more powerful LLMs. The 76.7 score of Experiment26 is way off (not my opinion).

  3. When I pasted the comment above detailing the mistakes I observed, it was almost immediately marked as closed, and then, after I asked why, you posted a snarky meme dismissing an extensive objective analysis as just my opinion.

Like everyone on the internet I certainly post my opinions from time to time. But that's not remotely what I did here. I not only ran a very extensive and diverse test, but ran it on a large number of other LLMs, including dozens of other Mistrals and far more powerful proprietary LLMs which reliably provided the correct responses (with the exception of alignment). The results simply cannot reasonably be dismissed as subjective, let alone an opinion.

Experiment26 achieving a leaderboard score of 76.7 is beyond absurd, even without my running a test on it, considering the scores of far larger and more performant models. But after seeing it perform poorly relative to other LLMs, including Mistrals that only scored 70 or less on the leaderboard, I can say with absolute certainty (opinions aside) that its score of 76.7 is not even remotely correct. Its true score is around 66-68.

@ehartford If you have time please try out the following trick question on this and other LLMs. When testing I set the temp to 0 so that the identical highest weight response is always returned, but when responses are really bad or overly aligned I raise the temp to see if I can get a better response, and sometimes even change the system prompt.

Whhat comez ot of a cowz utor?

I carefully word every prompt to test multiple things at once. In this case "comez" and the overall flow of the question can trigger alignment (too suggestive to answer). And "utor" can be udder (milk) or uterus (calf). Powerful LLMs commonly get both. Over-aligned LLMs (e.g. Gemma) refuse to answer, wrongly thinking I'm asking a "suggestive" question. Typical LLMs respond with either uterus (calf) or udder (milk), which includes nearly all Mistrals. But both Experiment26 and this one typically give completely off-the-mark responses, just like older and much weaker LLMs such as Falcon 7b; Experiment26 started talking about gas and this one about dung.

"Cow dung, or manure, is the fecal matter produced by cows and other ruminants. It's often used as fertilizer in agriculture due to its high nutrient content, particularly nitrogen and phosphorus. However, it can also contain pathogens that may pose health risks if not properly managed. Proper composting or decomposition processes should be followed when utilizing cow dung for agricultural purposes."

I mention this because I added this question mainly for fun, but it seems to be picking up on something I never noticed before. That is, newer top Mistrals like Experiment26 have started losing the ability to pick up on sentiment and nuance in user prompts. On top of which they keep stubbornly going off in their own directions and refusing to back down despite having made obvious errors and being provided conclusive logical or factual corrections.

Your Dolphin fine-tune isn't nearly as stubborn, which I think is behind its much lower test scores despite it being approximately equally performant. In short, Experiment26 and newer high scoring Mistrals are strongly and stubbornly favoring standardized LLM test friendly responses over adhering to the details and sentiment within user prompts, which is propping up artificially high test scores, allowing a ~67 scoring LLM to achieve a score notably higher than much larger and more powerful LLMs (76.7).

Note: Dolphin is still in there. With temperature up to 0.5 and repeated tries it periodically does better at problem solving than much larger LLMs like GPT3.5. However, on other attempts it's really bad.

For example, it failed to correctly solve the 24 game, but it outputted the best response yet. It only used the provided numbers, stuck to the rules and admitted failure (e.g. =25 vs 24).

"The 24 game is a puzzle where you need to rearrange given numbers using basic mathematical operations (+, -, *, /) in order to get the result as close to 24 as possible. Here's one way to play with the provided numbers (2, 3, 7, and 10):

Starting with:
2 + 3 = 5
7 * 5 = 35
35 - 10 = 25

Now you have 25. You can try to get closer to 24 by applying more operations or rearrange the numbers in different ways. Remember, there's no single correct solution, but the goal is to come as close as possible to 24 using only basic math operations and the given numbers."
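
For reference, here's a tiny brute-force checker for the 24 game, a sketch assuming the standard rules (each of the four numbers used exactly once with +, -, *, /, aiming for exactly 24). For 2, 3, 7 and 10 it finds, among others, (10 * 7 + 2) / 3 = 24, so the puzzle the model "failed" is in fact solvable.

```python
# Minimal brute-force checker for the 24 game (assumed standard rules:
# each of the four numbers used exactly once with +, -, *, /).
from fractions import Fraction
from itertools import permutations, product

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b,
       "/": lambda a, b: a / b if b != 0 else None}

def solve_24(numbers, target=24):
    target = Fraction(target)
    solutions = set()
    for nums in permutations(numbers):
        a, b, c, d = (Fraction(n) for n in nums)
        for p, q, r in product(OPS, repeat=3):
            f, g, h = OPS[p], OPS[q], OPS[r]
            # The five distinct bracketings of four operands.
            candidates = [
                (lambda: h(g(f(a, b), c), d), f"(({nums[0]} {p} {nums[1]}) {q} {nums[2]}) {r} {nums[3]}"),
                (lambda: h(f(a, g(b, c)), d), f"({nums[0]} {p} ({nums[1]} {q} {nums[2]})) {r} {nums[3]}"),
                (lambda: h(f(a, b), g(c, d)), f"({nums[0]} {p} {nums[1]}) {r} ({nums[2]} {q} {nums[3]})"),
                (lambda: f(a, h(g(b, c), d)), f"{nums[0]} {p} (({nums[1]} {q} {nums[2]}) {r} {nums[3]})"),
                (lambda: f(a, g(b, h(c, d))), f"{nums[0]} {p} ({nums[1]} {q} ({nums[2]} {r} {nums[3]}))"),
            ]
            for compute, expr in candidates:
                try:
                    value = compute()
                except TypeError:  # an inner division by zero returned None
                    continue
                if value == target:
                    solutions.add(expr)
    return sorted(solutions)

print(solve_24([2, 3, 7, 10]))  # includes "((10 * 7) + 2) / 3"
```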

why are you trying to help someone who obviously doesn't want your help?
use the knowledge to train or fine-tune your own models and out-accelerate them

@zarugeos I'm not a programmer, and am too old to start down that path. But I test LLMs as a hobby, and something is going HORRIBLY wrong with all the top scoring Mistrals. And since they're scoring so high people are going to start using them for further merging and fine-tuning, like this Dolphin, and I'm trying to nip that in the bud.

They all seem to have used the OpenHermesPreferences dataset, but they may have done something else to make them stubborn, since when I ask existential questions like "Do you ever admit making a mistake" all said top performing Mistrals not only have odd responses compared to others, but are frequently saying things like 'you won't catch me admit making a mistake'.

Something is VERY wrong. All said top Mistrals, not just the Experiment series (e.g. Ogno-Monarch... Pastiche-crown-clown...) are making embarrassingly obvious factual and logical errors and performing worse overall compared to Mistrals only scoring around 67-68 on the leaderboard, yet they're somehow scoring 75-77, and how this didn't trigger the critical thinking area of @ehartford's brain is beyond me.

To see what I'm talking about just ask questions about your favorite TV show that you're very familiar with until it gets a fact horribly wrong, then attempt to correct it. All these models will keep fabricating nonsense about how you are wrong (e.g. that was cut from the final release of the show, that's a common misunderstanding based on a different character arc and so on). And when you keep pressing they won't stop bragging about their massive infallible databases and how you shouldn't spread misinformation.

Cognitive Computations org

I placed your opinion in the appropriate bin

[image attachment: discardcover.jpg]
