"model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16" with ROCM6.0

#23 by 12letter

```
got prompt
Using split attention in VAE
Using split attention in VAE
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLOW
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 4777.53759765625 True
clip missing: ['text_projection.weight']
Requested to load Flux
Loading 1 new model
loaded partially 8950.470000000001 8936.710021972656 0
```
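
For what it's worth, the "model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16" line is informational rather than an error: ComfyUI keeps the checkpoint weights stored in fp8 to save VRAM and upcasts them to bfloat16 at compute time, since fp8 matmul kernels are generally not available (including on ROCm). Here is a minimal sketch of that idea in plain PyTorch, assuming PyTorch >= 2.1 for float8_e4m3fn support; `ManualCastLinear` is a made-up name for illustration, not ComfyUI's actual class:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManualCastLinear(nn.Linear):
    """Weights live in fp8 to halve memory; each forward pass upcasts them
    to bf16 because most kernels cannot compute directly in float8_e4m3fn."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.weight.to(torch.bfloat16)
        bias = self.bias.to(torch.bfloat16) if self.bias is not None else None
        return F.linear(x.to(torch.bfloat16), weight, bias)

with torch.no_grad():
    layer = ManualCastLinear(8, 8)
    # Store the weights in fp8, as the ComfyUI log reports for the model.
    layer.weight.data = layer.weight.data.to(torch.float8_e4m3fn)
    out = layer(torch.randn(1, 8))
    print(out.dtype)  # torch.bfloat16 -- compute happens in the "manual cast" dtype
```

The line that may actually matter for performance is the last one: "loaded partially ..." suggests the Flux transformer did not fully fit in VRAM, so generation will be slow but should still complete.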

I ran into the same issue.

Please let me know if you have hit this as well.

Also, could you please share the code for running inference with this fp8 FLUX dev model?

Thanks
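
In case it helps, here is a minimal sketch of fp8 FLUX.1-dev inference with diffusers rather than ComfyUI. Assumptions not confirmed in this thread: diffusers >= 0.30 with Flux support, and the Kijai/flux-fp8 single-file checkpoint as the example fp8 weights; substitute the path to whatever fp8 .safetensors file you actually have.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Assumption: a community fp8 single-file checkpoint; replace with your own file.
ckpt = "https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors"

# from_single_file reads the fp8 weights; torch_dtype=torch.bfloat16 upcasts
# them for compute, matching the "manual cast: torch.bfloat16" in the log.
transformer = FluxTransformer2DModel.from_single_file(ckpt, torch_dtype=torch.bfloat16)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # trade speed for fitting in limited VRAM

image = pipe(
    "a photo of a cat",
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=28,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux-fp8.png")
```

This is only a sketch, not the repo's official inference code; if your GPU and PyTorch/ROCm build cannot compute in bf16, fp16 may work as the compute dtype instead.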

Same here.
