amd-shark/sdxl-quant-fp8
sdxl-quant-fp8
4 contributors · History: 19 commits
Latest commit cfd94d7 by nickfraser: "Added models that are fully quantized with FP8." (3 months ago)
Directories:
  all_linear_sym_8_calib8              Fix names (4 months ago)
  all_sym_8_calib10                    MI250 QKV fused and all layers sym, FP8 attention, guidance scale 8, calib steps 10 (4 months ago)
  brevitas                             updated quant_params with QKV fusion (5 months ago)
  linear_conv_fp8_sdpa_fp16_eq_bl      Added models that are fully quantized with FP8. (3 months ago)
  linear_conv_fp8_sdpa_fp16_no_eq_bl   Added models that are fully quantized with FP8. (3 months ago)
  linear_conv_fp8_sdpa_fp8_eq_bl       Added models that are fully quantized with FP8. (3 months ago)
  linear_conv_fp8_sdpa_fp8_no_eq_bl    Added models that are fully quantized with FP8. (3 months ago)
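The directory names ("sym", "8", calibration-step counts) point to symmetric quantization recipes produced with Brevitas. As a loose, hypothetical illustration of what symmetric per-tensor quantization means (a NumPy sketch, not the repo's actual Brevitas flow), a simulated quantize/dequantize round trip looks like:

```python
import numpy as np

def quantize_sym8(x, num_bits=8):
    """Simulated symmetric per-tensor quantization: quantize to a
    signed integer grid, then dequantize back to float. Illustrative
    only -- not the Brevitas implementation used in this repo."""
    qmax = 2 ** (num_bits - 1) - 1           # e.g. 127 for 8 bits
    scale = np.abs(x).max() / qmax           # symmetric: zero-point is 0
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                         # dequantized approximation

x = np.random.randn(16).astype(np.float32)
x_q = quantize_sym8(x)
print(np.abs(x - x_q).max())  # worst-case round-trip error, at most scale/2
```

A symmetric scheme keeps the zero point at 0 so only a scale must be stored per tensor (or per channel), which is typically what a "sym" suffix refers to; the FP8 variants in this repo map onto a float8 grid rather than the integer grid shown here.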
Files:
  .gitattributes      2.08 kB         Added models that are fully quantized with FP8. (3 months ago)
  attn.py             6.26 kB         Added SDPA math model & test (4 months ago)
  sdxl.json           2.19 MB         Upload sdxl.json with huggingface_hub (6 months ago)
  sdxl.safetensors    5.14 GB (LFS)   Upload sdxl.safetensors with huggingface_hub (6 months ago)
  test_attn.py        1.29 kB         Added SDPA math model & test (4 months ago)
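attn.py and test_attn.py are described in their commit message as an "SDPA math model & test". As a self-contained sketch of the scaled dot-product attention math such a model presumably covers (assumed here, not copied from attn.py):

```python
import numpy as np

def sdpa(q, k, v, scale=None):
    """Reference scaled dot-product attention:
    softmax(q @ k^T * scale) @ v, with scale defaulting to 1/sqrt(d)."""
    d = q.shape[-1]
    if scale is None:
        scale = 1.0 / np.sqrt(d)
    scores = q @ np.swapaxes(k, -1, -2) * scale
    # numerically stable softmax over the key axis
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy shapes: (batch, sequence, head_dim)
q = np.random.randn(2, 4, 8)
k = np.random.randn(2, 4, 8)
v = np.random.randn(2, 4, 8)
out = sdpa(q, k, v)
print(out.shape)  # (2, 4, 8)
```

A floating-point reference like this is useful as a golden model when validating quantized (FP8) attention kernels: the quantized implementation is run on the same inputs and its output compared against the reference within a tolerance.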