Primer Primed

PrimeD

AI & ML interests

None yet

Recent Activity

Organizations

None yet

PrimeD's activity

New activity in bartowski/Monstral-123B-GGUF about 2 months ago

IQ3_M Quant?
3 comments · #2 opened about 2 months ago by PrimeD

Invalid Split File
6 comments · #1 opened about 2 months ago by PrimeD
New activity in Darkknight535/OpenCrystal-15B-L3-v3 4 months ago

[Feedback]
5 comments · #1 opened 4 months ago by Darkknight535
New activity in bullerwins/Reflection-Llama-3.1-70B-GGUF 4 months ago

Thanks for the Update
1 comment · #1 opened 4 months ago by PrimeD
Reacted to mlabonne's post with 👍 6 months ago
Large models are surprisingly bad storytellers.

I asked 8 LLMs to "Tell me a bedtime story about bears and waffles."

Claude 3.5 Sonnet and GPT-4o gave me the worst stories: no conflict, no moral, zero creativity.

In contrast, smaller models were quite creative and wrote stories involving talking waffle trees and bears ostracized for their love of waffles.

Here you can see a comparison between Claude 3.5 Sonnet and NeuralDaredevil-8B-abliterated. Both start with a family of bears but quickly diverge in personality, conflict, and so on.

I mapped it to the hero's journey to have some kind of framework. Prompt engineering can definitely help here, but it's still disappointing that the larger models don't create better stories right off the bat.

Do you know why smaller models outperform the frontier models here?