
Model Card for madlad400-3b-mt-4bit

🚨 This model is a 4-bit quantized version of Google's madlad400-3b-mt, quantized using bitsandbytes. You can find the unquantized version of madlad400-3b-mt here.
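The pre-quantized weights can be loaded with 🤗 Transformers and bitsandbytes on a CUDA GPU. A minimal sketch, assuming the repository id matches the card title (`madlad400-3b-mt-4bit` is an assumption) and the standard MADLAD-400 target-language prefix convention:

```python
# Minimal sketch: load the 4-bit quantized MADLAD-400 checkpoint.
# Assumes: a CUDA GPU, bitsandbytes installed, and that the repo id
# below matches this model card (unverified assumption).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "madlad400-3b-mt-4bit"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="auto")

# MADLAD-400 MT expects a target-language token prefix,
# e.g. "<2de>" to translate into German.
text = "<2de> I love pizza!"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(inputs.input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The `<2xx>` prefix selects the output language; the rest of the input is the source sentence in any of the model's supported languages.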

Model size: 2.03B params (Safetensors)
Tensor types: F32, FP16, U8