onekq posted an update about 18 hours ago
The majority of OneSQL downloads went to the lowest-end variant (7B-GGUF). I didn't expect this at all. This variant has the lowest accuracy, the tradeoff for its small size.

Like all LLMs, coding models hallucinate too, and the wrong answers they give are often only inches away from the right ones. In the case of SQL, the hallucinated code is not only plausible-looking but also executable, so it silently returns the wrong rows.
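
To make that concrete, here is a minimal sketch using sqlite3 and a made-up `orders` table (the schema, data, and queries are hypothetical, not from OneSQL's benchmarks): both queries parse and run, but only the first answers the intended question.

```python
import sqlite3

# Toy schema: table, columns, and rows are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, year INTEGER);
    INSERT INTO orders VALUES
        (1, 'alice', 100.0, 2023),
        (2, 'alice',  50.0, 2024),
        (3, 'bob',    75.0, 2024);
""")

# Intended question: total spend per customer in 2024.
correct = """
    SELECT customer, SUM(amount) FROM orders
    WHERE year = 2024 GROUP BY customer
"""

# A hallucinated-but-valid variant: the year filter is dropped.
# It still executes and returns rows -- just the wrong ones.
hallucinated = """
    SELECT customer, SUM(amount) FROM orders
    GROUP BY customer
"""

print(con.execute(correct).fetchall())       # alice: 50.0 (2024 only)
print(con.execute(hallucinated).fetchall())  # alice: 150.0 (all years, wrong)
```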

I'm clueless, and curious how users will deal with this.

Knowing models from small to large, that is what I would expect from a 7B model. Models work well with the most popular languages, and SQL doesn't seem to be one of them. I use Emacs Lisp, and so far only DeepSeek gives good answers for it.

It is also understandable that users want to download the 7B model; that is of course a matter of finances and resources. The majority of users have low-end GPU cards, not high-end ones. It is not easy to have 24 GB of VRAM or more; most users have 4, 8, or 12 GB. That is why they download the lower-end models.


You can run inference on CPU, but it will be very, very slow 😕
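
For anyone who wants to try that anyway, here is a minimal sketch of CPU-only inference on a GGUF file using llama-cpp-python; the model filename and prompt are placeholders, not the actual OneSQL release artifacts.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical filename -- substitute the GGUF you actually downloaded.
llm = Llama(
    model_path="./OneSQL-7B.Q4_K_M.gguf",
    n_ctx=2048,      # context window
    n_gpu_layers=0,  # 0 = keep every layer on the CPU
    n_threads=8,     # roughly match your physical core count
)

prompt = "-- Translate to SQL: total spend per customer in 2024\n"
out = llm(prompt, max_tokens=128, temperature=0.0)
print(out["choices"][0]["text"])
```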
