Any chance of releases at lower quantization levels like Q4_K_M? They're still useful and a better fit for mobile devices.