calcuis/pony

Text-to-Image · GGUF · English · doi:10.57967/hf/3995 · gguf-node · License: apache-2.0
gguf quantized legacy models for anime (additional test pack for gguf-node)
Prompt
score_9, score_8_up, score_7_up, film grain, photo by fuji-proplus-ii film, raw picture of 20 years old woman in lingerie, portrait, deep blue sky, cloudy sky, outdoor, high key light, soft shadow, Fiery clouds, colored hair,
Negative Prompt
score_6, score_5, score_4, source_pony, (worst quality:1.2), (low quality:1.2), (normal quality:1.2), lowres, bad anatomy, bad hands, signature, watermarks, ugly, imperfect eyes, skewed eyes, unnatural face, unnatural body, error, extra limb, missing limbs, painting by bad-artist,
Prompt
drag it to the browser (the image carries the workflow metadata); same descriptor as the 1st one, but a different model (boleromix)
Prompt
drag it to the browser (the image carries the workflow metadata); same descriptor as the 1st one, but a different model (snow)
setup (in general)
- drag gguf file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models); see the download sketch after this list
- drag the clip or encoder(s), i.e., g-clip and l-clip, to the text_encoders folder (./ComfyUI/models/text_encoders)
- drag the vae decoder(s), i.e., legacy-vae, to the vae folder (./ComfyUI/models/vae)
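If you would rather script this step, here is a minimal sketch using huggingface_hub to pull one quantized file straight into the folder above; the filename is a placeholder, pick a real one from the repo's file listing.

```python
from huggingface_hub import hf_hub_download

# Minimal sketch: download one quantized model from this repo into the
# ComfyUI diffusion_models folder described above.
# The filename is a PLACEHOLDER; use a real one from the repo's file listing.
hf_hub_download(
    repo_id="calcuis/pony",
    filename="pony-q4_k_m.gguf",  # hypothetical filename
    local_dir="./ComfyUI/models/diffusion_models",
)
```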
run it straight (no installation needed)
- get the comfy pack with the new gguf-node (beta)
- run the .bat file in the main directory
workflow
- drag any workflow json file to the activated browser; or
- drag any generated output file (i.e., picture, video, etc., which contains the workflow metadata) to the activated browser; a sketch for reading that metadata follows this list
- example workflow json for safetensors
- example workflow json for gguf
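ComfyUI saves the workflow as a JSON string inside the text metadata of its PNG outputs, which is what makes the drag-and-drop above work. A minimal sketch for inspecting it, assuming Pillow is installed; the output path is a hypothetical example:

```python
import json
from PIL import Image

# ComfyUI embeds the workflow as a JSON string in the PNG text chunks of
# the images it saves. The path below is a hypothetical example output.
img = Image.open("ComfyUI_00001_.png")
workflow = img.info.get("workflow")  # None if no workflow chunk is present
if workflow:
    data = json.loads(workflow)
    print(f"embedded workflow has {len(data.get('nodes', []))} nodes")
```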
review
use tag/word(s) as input for more accurate results for those legacy models; not very convenient (compare to the recent models) at the very beginning
credits should be given to those contributors from civitai platform
good to run on old machines, i.e., 9xx series or before (legacy mode [--disable-cuda-malloc --lowvram] supported); compatible with the new gguf-node
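A minimal sketch of that legacy-mode launch, assuming a standard ComfyUI checkout in ./ComfyUI; the two flags are the ones quoted above, the path is an assumption:

```python
import subprocess

# Launch ComfyUI in legacy mode for older GPUs (e.g., 9xx series or before).
# The two flags come from the note above; the ./ComfyUI path is an assumption.
subprocess.run(
    ["python", "main.py", "--disable-cuda-malloc", "--lowvram"],
    cwd="./ComfyUI",
)
```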
reference
- comfyui (comfyanonymous)
- gguf-node (beta)
GGUF · Model size: 2.57B params · Architecture: sdxl

Available quantization types (from the file listing):
- 2-bit: Q2_K
- 3-bit: Q3_K_S, Q3_K_M, Q3_K_L
- 4-bit: Q4_0, Q4_1, Q4_K_S, Q4_K_M
- 5-bit: Q5_0, Q5_1, Q5_K_S, Q5_K_M
- 6-bit: Q6_K
- 8-bit: Q8_0
- 16-bit: F16