"This model also supports the following FIM tokens"

#1
by catarino - opened

Can you give more info about how to apply this?

I'm using

<s>[INST]
{User}
[/INST]</s>
{Assistant}

without system prompt, because it wrecks inference.
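Spelled out as code, that chat layout looks roughly like the sketch below (the template and token spellings come from the snippet above; `build_chat_prompt` is an illustrative name, not an official API):

```python
# Hedged sketch of the chat prompt layout quoted above, with no system prompt.
# The token spellings (<s>, [INST], [/INST], </s>) follow the template in the
# comment; the model generates the {Assistant} part after this prompt.
def build_chat_prompt(user_message: str) -> str:
    """Fill the {User} slot of the template shown above."""
    return f"<s>[INST]\n{user_message}\n[/INST]</s>\n"

prompt = build_chat_prompt("Write a function that adds two numbers.")
```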

Would love to know more about how to use the system prompt and the FIM tokens, ideally as a prompt format, if possible.

LM Studio Community org
edited May 31

It's not well documented, sadly. You may also have to re-download, because they forgot to include the tokens in the originally uploaded tokenizer :)

From their GitHub it looks like it should be:

<s>[SUFFIX]return a + b[PREFIX] def f(

which is definitely... interesting...
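For anyone wiring this up, the suffix-first layout above can be sketched as a small helper (a hedged sketch: the token spellings follow the GitHub example quoted above, and the function name is mine):

```python
# Hedged sketch of the FIM (fill-in-the-middle) prompt from the GitHub example
# above: the suffix comes FIRST, then the prefix, and the model generates the
# missing middle after the prefix text. build_fim_prompt is an illustrative name.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    return f"<s>[SUFFIX]{suffix}[PREFIX] {prefix}"

prompt = build_fim_prompt("def f(", "return a + b")
# Reproduces the example above: <s>[SUFFIX]return a + b[PREFIX] def f(
```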

Ah ok. Thank you @bartowski !

“They forgot to include it in the originally uploaded tokenizer”. This is also interesting albeit in an odd way 😬

LM Studio Community org

Yeah, someone noticed that the FIM tokens evaluated to token ID 0, so they uploaded a new tokenizer.model. This repo has the latest.

Great.

Regarding the re-download: I downloaded Codestral-22B-v0.1-Q8_0.gguf from this repo yesterday. Should I download it again?

By the way, I noticed that I need to set the layer count to 50 (the model has 56 max) to fix the inference issues.
Having said that, this is the first model that really feels usable to me, without awkward generations, endless repetitions, odd characters, etc.
It works great with both Pythagora and continue.dev.

Really happy for this.

Really thankful for your work. :)
