# Model Card for parasail-ai/Mistral-7B-Instruct-v0.3-GPTQ-4bit

A GPTQ-quantized 4-bit version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3). See the original model card for more information on prompt format, intended use, and limitations.
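
A minimal usage sketch, assuming the checkpoint uses the standard GPTQ layout that `transformers` can load directly (this requires the `optimum` package and a GPTQ kernel backend such as `auto-gptq` to be installed). The model ID comes from this card; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the 4-bit GPTQ checkpoint with transformers.
# Assumes optimum + a GPTQ kernel package (e.g. auto-gptq) are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "parasail-ai/Mistral-7B-Instruct-v0.3-GPTQ-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-Instruct models ship a chat template; apply_chat_template
# formats the conversation the way the instruct model expects.
messages = [{"role": "user", "content": "Summarize GPTQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the weights are already quantized, no `quantization_config` needs to be passed at load time; the GPTQ configuration stored with the checkpoint is picked up automatically when the required backend packages are available.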
