Isonium/WhiteRabbitNeo-33B-v1-GGUF
GGUF
License: deepseek (other)
Working GGUF files for WhiteRabbitNeo-33B-v1 are available under Files and versions.
Downloads last month: 170
Format: GGUF
Model size: 33.3B params
Architecture: llama
Available quantizations:
2-bit: Q2_K
3-bit: Q3_K_S, Q3_K_M
4-bit: Q4_K_S, Q4_0, Q4_K_M
5-bit: Q5_K_S, Q5_0, Q5_K_M
6-bit: Q6_K
8-bit: Q8_0
Inference API (serverless) has been turned off for this model.
Quantized from: WhiteRabbitNeo/WhiteRabbitNeo-33B-v1
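Since the serverless Inference API is disabled, the quantized files are meant to be downloaded and run locally, for example with llama.cpp. A minimal sketch follows; the exact `.gguf` filename is an assumption here, so check the repository's file listing for the real name before downloading.

```shell
# Download one quantization from the Hub.
# NOTE: the filename below is an assumed example; verify it in Files and versions.
huggingface-cli download Isonium/WhiteRabbitNeo-33B-v1-GGUF \
  WhiteRabbitNeo-33B-v1-Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI (built from the llama.cpp repository).
# -n limits generated tokens; -c sets the context size.
./llama-cli -m WhiteRabbitNeo-33B-v1-Q4_K_M.gguf \
  -p "Write a short port-scanning checklist." -n 256 -c 4096
```

Q4_K_M is a common middle ground between file size and quality; the 2-bit and 3-bit files trade accuracy for a smaller memory footprint, while Q6_K and Q8_0 stay closest to the original weights.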