This model was converted to GGUF format from nvidia/Mistral-NeMo-Minitron-8B-Instruct and quantized to Q2_K using the llama.cpp library.

Format: GGUF
Model size: 8.41B params
Architecture: llama
Quantization: Q2_K (2-bit)

Model repository: Manel/Mistral-NeMo-Minitron-8B-Instruct-Q2_K-GGUF
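
Below is a minimal sketch of loading and running this quantized model with the llama-cpp-python bindings. The GGUF filename pattern, context size, and prompt are illustrative assumptions; check the repository's file listing and adjust them to match the actual file name.

```python
# Minimal sketch: load the Q2_K GGUF from the Hub via llama-cpp-python.
# Assumes the repo contains a single *Q2_K.gguf file (pattern is a guess).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Manel/Mistral-NeMo-Minitron-8B-Instruct-Q2_K-GGUF",
    filename="*Q2_K.gguf",  # glob pattern matched against the repo's files
    n_ctx=4096,             # context window; raise for longer prompts
)

# Run a simple chat completion against the loaded model.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-sentence summary of GGUF quantization."}]
)
print(response["choices"][0]["message"]["content"])
```

The same file can also be served directly with the llama.cpp command-line tools; the Python bindings are used here only to keep the example self-contained.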