
# stable-diffusion-v-1-4-GGUF

## Original Model

[CompVis/stable-diffusion-v-1-4-original](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)

## Run with LlamaEdge

- LlamaEdge version: coming soon

## Quantized GGUF Models

The available formats differ in numeric precision: lower-precision quantizations produce smaller files but can reduce output quality.

Available precision formats: `f32`, `f16`, `q8_0`, `q5_0`, `q5_1`, `q4_0`, `q4_1`.
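
For reference, a quantized GGUF file from this repository can be fetched programmatically with the `huggingface_hub` Python package. This is a minimal sketch only: the `repo_id` and `filename` values below are assumptions and should be replaced with the actual repository ID and the exact file name shown in this repository's file list.

```python
# Minimal sketch: download one quantized GGUF file from this repository.
# NOTE: repo_id and filename are assumptions -- check the model page for the
# real repository ID and the exact GGUF file names before running.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="second-state/stable-diffusion-v-1-4-GGUF",  # assumed repository ID
    filename="stable-diffusion-v1-4-Q8_0.gguf",          # assumed q8_0 file name
)
print(f"Downloaded to: {local_path}")
```

Once downloaded, the file can be passed to a GGUF-compatible Stable Diffusion runtime, for example the upcoming LlamaEdge integration noted above.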