GGUF quants for https://huggingface.co/intfloat/e5-mistral-7b-instruct
NOTE: This is a text embedding model used for feature extraction.

Layers: 32
Context length: 32768
Prompt template:
Instruct: {task_description}
Query: {query}
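The prompt template above applies to queries only; per the upstream e5-mistral-7b-instruct model card, documents are embedded without the instruction prefix. A minimal sketch of the query formatting (the helper name `get_detailed_instruct` follows the upstream card; the task string is illustrative):

```python
def get_detailed_instruct(task_description: str, query: str) -> str:
    """Format a query with the instruction prefix the model expects."""
    return f"Instruct: {task_description}\nQuery: {query}"

# Illustrative task; substitute a description of your retrieval task.
task = "Given a web search query, retrieve relevant passages that answer the query"
prompt = get_detailed_instruct(task, "how much protein should a female eat")
print(prompt)
```

The resulting string is what you pass as input when computing query embeddings (e.g. with a GGUF-capable runtime such as llama.cpp in embedding mode); passage texts are passed as-is.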
Format: GGUF
Model size: 7.24B params
Architecture: llama

Available quantizations: 4-bit, 8-bit, 16-bit

Note: the Hugging Face Inference API (serverless) does not yet support GGUF models for this pipeline type.