
Minitron-4B-Base-FP8

FP8 quantized checkpoint of nvidia/Minitron-4B-Base for use with vLLM.
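A minimal usage sketch for serving this checkpoint with vLLM. The prompt and sampling parameters are illustrative only; vLLM picks up the FP8 quantization configuration from the checkpoint itself.

```python
from vllm import LLM, SamplingParams

# Load the FP8 checkpoint; quantization settings are read from the model files.
llm = LLM(model="mgoin/Minitron-4B-Base-FP8")

# Example generation (prompt and parameters are placeholders).
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```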

Model size: 4.19B params · Tensor types: BF16, F8_E4M3 (Safetensors)

Quantized from: nvidia/Minitron-4B-Base
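The card does not state how the quantization was performed. As an assumption-laden sketch, the snippet below shows one common way to produce an FP8 checkpoint of the base model using llm-compressor with a dynamic per-token activation scheme; the actual recipe, tool, and library versions used for this checkpoint may differ.

```python
from transformers import AutoTokenizer
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "nvidia/Minitron-4B-Base"

# Depending on your transformers version, trust_remote_code may be required for this architecture.
model = SparseAutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Quantize all Linear layers to FP8, keeping lm_head in higher precision.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])
oneshot(model=model, recipe=recipe)

save_dir = "Minitron-4B-Base-FP8"
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)
```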
