
This is a 4.25 bits-per-weight (bpw) quantized version of Qwen/Qwen2.5-32B-Instruct, produced with exllamav2.
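Below is a minimal loading sketch, assuming the exllamav2 Python package and huggingface_hub are installed and at least one CUDA GPU is available; class and method names follow exllamav2's example scripts, and the sampling settings are illustrative, not recommendations.

```python
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Download the quantized weights from the Hub (cached locally)
model_dir = snapshot_download("DrNicefellow/Qwen2.5-32B-Instruct-4.25bpw-exl2")

# Build the model configuration from the downloaded files
config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

# Load the model, letting exllamav2 split layers across available GPUs
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Example sampling settings (values are placeholders)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

print(generator.generate_simple(
    "Explain 4.25 bpw quantization in one sentence.", settings, 128))
```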

License

This model is available under the Apache 2.0 License.

Discord Server

Join our Discord server here.

Feeling Generous? 😊

Eager to buy me a cup of $2 coffee or iced tea? 🍵☕ Sure, here is the link: https://ko-fi.com/drnicefellow. Please add a note letting me know which one you'd like me to drink.
