test_nsfw_2-Q8_0.gguf

test_nsfw_2-Q8_0.gguf and test_nsfw_2-BF16.gguf: this is an upgraded version of the model described below. It is fairly realistic and good with NSFW content. You need the CLIP text encoders, the VAE, and CFG with this model.
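
Below is a minimal loading sketch using diffusers' GGUF support. The base repo (black-forest-labs/FLUX.1-dev, which supplies the CLIP/T5 text encoders and the VAE) and the local file path are assumptions for illustration, not something this card specifies:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# Load the Q8_0 GGUF transformer; compute in bfloat16.
transformer = FluxTransformer2DModel.from_single_file(
    "test_nsfw_2-Q8_0.gguf",  # local path to the downloaded file (assumed)
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# The base FLUX repo provides the "clips" (CLIP/T5 text encoders) and the VAE.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # assumed base repo; swap for your setup
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keeps peak VRAM down
```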

Example images created with this model are available on Civitai.


This is a test of a distilled FLUX model. It can produce an image in 12 steps, and with a refiner it can produce a high-quality image. It works with FLUX LoRAs and should also work with SDXL LoRAs. Do not use FP16; use FP8, since FP16 requires about 20 GB of VRAM while FP8 needs about 12 GB. Feedback would be appreciated if you test this model.
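
Continuing from the pipeline sketched above, here is a hedged generation example at the 12 steps mentioned; the prompt, resolution, and guidance value are placeholders to tune, not values specified by this card:

```python
# 12-step generation; guidance_scale and resolution are placeholder values.
image = pipe(
    prompt="a placeholder prompt describing the desired photo",
    num_inference_steps=12,
    guidance_scale=3.5,   # adjust to taste for CFG strength
    height=1024,
    width=1024,
).images[0]
image.save("test_nsfw_2_q8_sample.png")

# A FLUX LoRA can be attached before generating, e.g.:
# pipe.load_lora_weights("path/to/flux_lora.safetensors")  # path is hypothetical
```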

Format: GGUF
Model size: 11.9B params
Architecture: flux
Available quantizations: 4-bit, 6-bit, 8-bit, 16-bit
