## About

GGUF quantizations of https://huggingface.co/01-ai/Yi-1.5-6B
## Provided Quantizations

| Link | Type |
|---|---|
| GGUF | Q2_K |
| GGUF | Q3_K_S |
| GGUF | Q3_K_M |
| GGUF | Q3_K_L |
| GGUF | Q4_0 |
| GGUF | Q4_K_S |
| GGUF | Q4_K_M |
| GGUF | Q5_0 |
| GGUF | Q5_K_S |
| GGUF | Q5_K_M |
| GGUF | Q6_K |
| GGUF | Q8_0 |
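The table above lists quantization types without file sizes. As a rough sketch, the on-disk size of a GGUF file can be estimated as `n_params * bits_per_weight / 8`; the bits-per-weight figures below are approximate averages commonly cited for llama.cpp k-quants, not exact values taken from this repository.

```python
# Rough GGUF size estimate: size_bytes ≈ n_params * bits_per_weight / 8.
# The bits-per-weight values are approximations (assumed here for
# illustration), not exact figures for these specific files.

N_PARAMS = 6e9  # Yi-1.5-6B has roughly 6 billion parameters

APPROX_BPW = {
    "Q2_K": 2.6,
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
}

def est_size_gb(quant: str, n_params: float = N_PARAMS) -> float:
    """Approximate on-disk size in GB for a given quant type."""
    return n_params * APPROX_BPW[quant] / 8 / 1e9

for q in APPROX_BPW:
    print(f"{q}: ~{est_size_gb(q):.1f} GB")
```

Under these assumed bit widths, the quants span roughly 2 GB (Q2_K) to 6.4 GB (Q8_0), which is the usual size/quality trade-off when picking a file.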
In a somewhat circular citation, the format of this file is borrowed from https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF.
## Model tree for larenspear/Yi-1.5-6B-Chat-GGUF

Base model: 01-ai/Yi-1.5-6B