wenhuach
posted an update 13 days ago
AutoRound has demonstrated strong results even at 2-bit precision for VLMs such as Qwen2-VL-72B. Check it out here: OPEA/Qwen2-VL-72B-Instruct-int2-sym-inc.
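For anyone who wants to try a similar recipe, below is a minimal sketch of 2-bit symmetric quantization with AutoRound, assuming the library's standard text-model API as shown in its README. The small placeholder model and output path are illustrative only, and the exact settings behind the linked OPEA Qwen2-VL-72B checkpoint may differ (VLMs likely go through auto-round's multimodal path rather than this one).

```python
# Minimal AutoRound 2-bit symmetric quantization sketch.
# Placeholder model and output directory; not the recipe used for the OPEA checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "facebook/opt-125m"  # small placeholder model for illustration
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 2-bit, symmetric, group size 128 -- mirrors the "int2-sym" naming of the linked checkpoint
autoround = AutoRound(model, tokenizer, bits=2, group_size=128, sym=True)
autoround.quantize()

# Save in auto_round format (other export formats may also be available)
autoround.save_quantized("./opt-125m-int2-sym", format="auto_round")
```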

Thank you for the quantization the other day. 😀 Even 2-bit works properly...

I have a suggestion. I see quite a few people who struggle with GPTQ and AWQ quantization because it takes time and they don't have the environment for it. But posts like this are easy to miss, and once the timing has passed it feels awkward to ask, so it might be a good idea to have a permanent request page like mradermacher's:
https://huggingface.co/mradermacher
https://huggingface.co/mradermacher/model_requests

·

Thank you for your suggestion. As our focus is on algorithm development and our computational resources are limited, we currently lack the bandwidth to support a large number of models. If you come across any models that would benefit from quantization, feel free to comment on any models under OPEA. We will make an effort to prioritize and quantize them if resources allow.