Very Nice Work, But It Can't Be Prompted To Tell Stories

#19
by Phil337

With some other Mistrals like Open Hermes 2, as well as Llama 2 13b AYT, I can prompt a story with a paragraph of instructions and in most cases it will follow them without creating blatant contradictions.

However, this model stubbornly sticks to standard storytelling elements, such as suspense and happy endings, even when they blatantly contradict the prompt, leading to absurd continuity errors that not even a young child would make.

For example, if prompted for a kid to get caught stealing a cookie, other LLMs would simply say something like 'the door flung open'. This LLM, however, keeps saying things like he heard footsteps coming up the hall as he looked at the plate of cookies, then moments later was startled and caught red-handed eating them. And when I asked why, if he was supposed to get caught, it had him hear footsteps coming up the hall, Zephyr Beta said it was to build suspense.
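To make concrete what I mean by a paragraph of instructions, here is a minimal sketch of how a test like this can be run, using the standard chat pipeline from the model card. The model ID and generation settings come from there; the prompt wording is just an illustration of the kind of constraint the model keeps violating, not my exact test prompt:

```python
import torch
from transformers import pipeline

# Load Zephyr Beta with the settings suggested on the model card.
pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# A paragraph-long story prompt with an explicit constraint the model
# tends to violate: the kid must be caught with no advance warning.
messages = [
    {
        "role": "user",
        "content": (
            "Write a short story about a kid sneaking a cookie from the "
            "kitchen. He must be caught completely by surprise, with no "
            "warning of any kind, such as hearing footsteps approaching."
        ),
    },
]

# Format the conversation with Zephyr's chat template and generate.
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(
    prompt, max_new_tokens=512, do_sample=True,
    temperature=0.7, top_k=50, top_p=0.95,
)
print(outputs[0]["generated_text"])
```

The failure shows up in the output: despite the "no warning of any kind" constraint, the story still inserts footsteps or some other suspense-building cue before the catch.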

A blatant contradiction like this happens with every one of the paragraph-long story prompts I use to test LLMs, and in every case it's because the model is stubbornly sticking to pre-packaged storytelling elements like suspense and happy endings. I know this doesn't have to be the case because other LLMs are smart enough to avoid these contradictions (e.g. the door suddenly opening vs. first hearing footsteps coming down the hall). And it's not that it can't comprehend the prompt: when I ask why hearing footsteps coming up the hall precludes getting caught, it can explain why, then tell the story again with the correction made.

In short, prompting Zephyr Beta to tell a story turns into a battle against its pre-packaged storytelling elements. Other than this, Zephyr Beta is great and did far better in my testing than Zephyr Alpha, which has the same storytelling stubbornness, producing blatant contradictions that not even a human toddler would make when following prompted instructions.
