
Theodore Hierath

thiera1

AI & ML interests

None yet

Recent Activity

liked a model about 1 month ago
bartowski/Ministral-8B-Instruct-2410-GGUF
liked a model about 1 month ago
mistralai/Ministral-8B-Instruct-2410
liked a model about 2 months ago
bartowski/EXAONE-3.5-7.8B-Instruct-GGUF

Organizations

MLX Community

thiera1's activity

reacted to inflatebot's post with 👍 6 months ago
!!SEE UPDATE BELOW!!
I don't know who still needs to hear this, but if you're using Mistral Nemo-based models, you might have been using the wrong completions format. This is a signal boost from MarinaraSpaghetti's model card for NemoMix-Unleashed: MarinaraSpaghetti/NemoMix-Unleashed-12B
A lot of people have been working with a version of Nemo that's been reconfigured for ChatML. That works great, but simply using the right format might be just as effective at correcting the weirdness people in the AIRP scene sometimes run into with Nemo.

Huge ups to Marinara for pointing this out, and to the MistralAI team member who let her know.

Update: A PR has been merged to SillyTavern Staging with new corrected templates! If you don't want to switch or wait, I put them up on GitHub: https://github.com/inflatebot/SillyTavern-Mistral-Templates

PRs for KoboldCPP's chat adapters and KoboldAI Lite *have been merged* and are coming in their respective releases (probably the next time KoboldCPP updates -- it didn't make it for 1.75.1, but you could just grab 'em from the repo!)
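For anyone unsure what the two formats look like in practice, here is a rough sketch of the difference between ChatML wrapping and Mistral-style [INST] wrapping. The helper names are made up, and the exact special tokens and whitespace for Nemo differ; take the authoritative templates from the SillyTavern/GitHub links above rather than from this snippet.

```python
# Illustrative only: the exact tokens/whitespace for Mistral Nemo should come
# from the corrected templates linked above, not from this sketch.

def chatml_prompt(user_msg: str) -> str:
    # ChatML-style wrapping, as used by Nemo variants reconfigured for ChatML.
    return ("<|im_start|>user\n" + user_msg + "<|im_end|>\n"
            "<|im_start|>assistant\n")

def mistral_style_prompt(user_msg: str) -> str:
    # Mistral-style [INST] wrapping (approximate; Nemo's own template differs
    # in whitespace and special-token details).
    return "<s>[INST] " + user_msg + " [/INST]"

print(chatml_prompt("Hello!"))
print(mistral_style_prompt("Hello!"))
```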
reacted to bartowski's post with 👍 7 months ago
So turns out I've been spreading a bit of misinformation when it comes to imatrix in llama.cpp

It starts out true: imatrix runs the model against a corpus of text and tracks the activations of weights to determine which are most important

However, what the quantization then does with that information is where I was wrong.

I think I made an accidental connection between imatrix and ExLlamaV2's measurement pass, where ExLlamaV2 decides how many bits to assign to which weights depending on the target BPW

Instead, what llama.cpp does with imatrix is attempt to select a scale for each quantization block that most accurately returns the important weights to their original values, i.e. it minimizes the dequantization error weighted by the importance of the activations

The mildly surprising part is that it actually just does a relatively brute-force search: it picks a bunch of candidate scales, tries each one, and sees which results in the minimum error for the weights deemed important in the group

But yeah, turns out the quantization scheme is always the same; it's just that the scale selection has a bit more logic to it when you use imatrix

Huge shoutout to @compilade for helping me wrap my head around it - feel free to add/correct as well if I've messed something up
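
To make the selection step concrete, here is a minimal sketch in Python/NumPy, not llama.cpp's actual C/C++ code, of an importance-weighted, brute-force scale search for a single quantization block. The function name, the candidate-scale grid, and the symmetric int4 scheme are illustrative assumptions; only the overall idea, trying several scales and keeping the one with the lowest importance-weighted dequantization error, comes from the post.

```python
# Illustrative sketch (not llama.cpp source): importance-weighted,
# brute-force scale search for one quantization block.
# The function name, candidate grid, and symmetric int4 scheme are assumptions.
import numpy as np

def quantize_block(weights: np.ndarray, importance: np.ndarray,
                   n_bits: int = 4, n_candidates: int = 32):
    """Pick a block scale that minimizes importance-weighted
    dequantization error by trying a grid of candidate scales."""
    q_max = 2 ** (n_bits - 1) - 1               # e.g. 7 for symmetric int4
    base_scale = float(np.max(np.abs(weights))) / q_max
    if base_scale == 0.0:                       # all-zero block: nothing to do
        return np.zeros_like(weights, dtype=np.int8), 0.0

    best_scale, best_err = base_scale, np.inf
    # Brute force: sweep candidate scales around the naive max/q_max choice
    # and keep whichever reconstructs the "important" weights most faithfully.
    for step in range(1, n_candidates + 1):
        scale = base_scale * (0.5 + step / n_candidates)   # sweep ~0.5x..1.5x
        q = np.clip(np.round(weights / scale), -q_max - 1, q_max)
        err = float(np.sum(importance * (weights - q * scale) ** 2))
        if err < best_err:
            best_scale, best_err = scale, err

    q = np.clip(np.round(weights / best_scale), -q_max - 1, q_max)
    return q.astype(np.int8), best_scale

# Tiny usage example with made-up numbers; `importance` stands in for the
# activation statistics that imatrix collects over a text corpus.
w = np.array([0.12, -0.80, 0.05, 0.33], dtype=np.float32)
imp = np.array([1.0, 5.0, 0.1, 2.0], dtype=np.float32)
q, s = quantize_block(w, imp)
print(q, s, q * s)   # quantized ints, chosen scale, dequantized approximation
```

Real GGUF quantization uses more involved block layouts and search heuristics; the sketch only captures the importance-weighted error minimization the post describes.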