Inferring in a multi-model session but getting wrong output

#8
by enlei - opened

Loading the ChatML-preset qwen2 model in a multi-model session, I get wrong output when inputting "what can you do?"

As shown in the image below:

[screenshot: image.png]

When you use this model in LM Studio, you need to use the included ChatML preset.
Then in Settings (right-hand side of the chat screen), go to -> Model Initialization -> Flash Attention -> turn it on.
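For context, here is a minimal sketch of the ChatML format that the preset applies; the function name is illustrative, but the `<|im_start|>`/`<|im_end|>` delimiters are the special tokens Qwen2 is trained on. Without a preset wrapping turns this way, the model often produces garbled output like the one in the screenshot.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system message and one user turn in ChatML delimiters,
    leaving the prompt open at the assistant turn for generation."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example: the prompt the preset would effectively send for this thread's input.
print(build_chatml_prompt("You are a helpful assistant.", "what can you do?"))
```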

jklj077 changed discussion status to closed
