#21 · quite slow to load the fp8 model · 6 replies · opened 10 days ago by gpt3eth
#20 · RuntimeError: "addmm_cuda" not implemented for 'Float8_e4m3fn' · 1 reply · opened 10 days ago by gradient-diffusion
#19 · How to load into VRAM? · 2 replies · opened 12 days ago by MicahV
#18 · What setting to use for flux1-dev-fp8 · 2 replies · opened 12 days ago by fullsoftwares
#17 · 'float8_e4m3fn' attribute error · 1 reply · opened 13 days ago by Magenta6
#16 · Loading flux-fp8 with diffusers · opened 13 days ago by 8au
#15 · FP8 Checkpoint version size mismatch? · 2 replies · opened 13 days ago by Thireus
#14 · Can this model be used on Apple Silicon? · 15 replies · opened 13 days ago by jsmidt
#13 · How to use fp8 models + original flux repo? · opened 14 days ago by rolux
#7 · Quantization Method? · 6 replies · opened 15 days ago by vyralsurfer
#6 · ComfyUI Workflow · 1 reply · opened 15 days ago by Jebari
#5 · Can you make FP8 version of schnell as well please? · 3 replies · opened 15 days ago by MonsterMMORPG
#4 · Diffusers? · 3 replies · opened 16 days ago by tintwotin
#3 · Minimum VRAM requirements? · 3 replies · opened 16 days ago by joachimsallstrom
#2 · FP16 · 1 reply · opened 16 days ago by bsbsbsbs112321
#1 · Metadata lost from model · 4 replies · opened 16 days ago by mcmonkey