---
language:
- en
library_name: transformers
tags:
- bitsandbytes
license: apache-2.0
base_model:
- Qwen/QwQ-32B
---
## Model Details
This is [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) quantized to 4-bit with [bitsandbytes](https://github.com/bitsandbytes-foundation/bitsandbytes). The model was created, tested, and evaluated by The Kaitchup.
The model is compatible with vLLM and Transformers.

Details on the quantization process and how to use the model are available here: [The Kaitchup](https://kaitchup.substack.com/)
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
## How to Support My Work
Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). Your support helps me continue quantizing and evaluating models for free.