inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_DYNAMIC-gate_up_proj-all (7B)
inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_DYNAMIC-down_proj-all (6B)
inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_DYNAMIC-qkv_proj-all (5B)
inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_DYNAMIC-out_proj-all (5B)
inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_BLOCK-gate_up_proj-all (7B)
inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_BLOCK-down_proj-all (6B)
inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_BLOCK-qkv_proj-all (5B)
inference-optimization/Llama-3.1-8B-Instruct-Mixed-NVFP4-FP8_BLOCK-out_proj-all (5B)
inference-optimization/Llama-3.3-70B-Instruct-FP8-dynamic-QKV-Cache-FP8-Per-Tensor (71B)
inference-optimization/Llama-3.3-70B-Instruct-FP8-dynamic-QKV-Cache-FP8-Per-Head (71B)
inference-optimization/Llama-3.1-8B-Instruct-FP8-dynamic-QKV-Cache-FP8-Per-Tensor (8B)
inference-optimization/Llama-3.1-8B-Instruct-FP8-dynamic-QKV-Cache-FP8-Per-Head (8B)