Do you have a quantized version of the model that works with sentence_transformers?

Opened by sungkim

Do you plan to add a quantized version of the model that works with sentence_transformers?

Salesforce org

Hi @sungkim ,

We don't plan to release a separately quantized version of the model, because Hugging Face's model loading already supports on-the-fly quantization. To quantize to 4 bits at load time, you can use the following code snippet:

```python
import torch
from transformers import BitsAndBytesConfig, AutoModel

# 4-bit NF4 quantization with double quantization, computing in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Weights are quantized on the fly while loading
encoder = AutoModel.from_pretrained(
    'Salesforce/SFR-Embedding-Mistral',
    trust_remote_code=True,
    device_map='auto',
    torch_dtype=torch.bfloat16,
    quantization_config=bnb_config,
)
```
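
If you want to stay inside sentence_transformers rather than calling transformers directly, recent sentence-transformers releases (2.3.0 and later, so worth checking your installed version) expose a `model_kwargs` argument that is forwarded to the underlying `from_pretrained` call, so the same config should work there as well. A minimal sketch, untested on our side:

```python
import torch
from sentence_transformers import SentenceTransformer
from transformers import BitsAndBytesConfig

# Same 4-bit NF4 config as above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# model_kwargs is forwarded to transformers' from_pretrained,
# so the encoder is quantized on the fly while loading
model = SentenceTransformer(
    "Salesforce/SFR-Embedding-Mistral",
    model_kwargs={"quantization_config": bnb_config},
)

embeddings = model.encode(["How do I load a 4-bit embedding model?"])
print(embeddings.shape)
```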

Hi! HF quants are really slow for production environments, but I have a question: would it be possible to quantize to AWQ or GPTQ in order to run the model in TGI or vLLM for serving purposes? I can quantize it to AWQ or GPTQ and push it to the Hub, but I need to know whether the model is compatible with those quant formats. Regards!
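
For reference, the GPTQ route I have in mind is roughly the sketch below, using transformers' built-in `GPTQConfig` (which drives optimum/auto-gptq under the hood). I haven't run it against this checkpoint, and whether the GPTQ tooling accepts an encoder-style `AutoModel` rather than a CausalLM wrapper is exactly the compatibility question; the target repo name is just a placeholder.

```python
from transformers import AutoModel, AutoTokenizer, GPTQConfig

model_id = "Salesforce/SFR-Embedding-Mistral"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit GPTQ calibrated on the c4 dataset (requires optimum and auto-gptq)
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

# Quantization happens during loading; this may need the CausalLM wrapper
# instead of AutoModel, depending on what the GPTQ tooling expects
model = AutoModel.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
    trust_remote_code=True,
)

model.push_to_hub("my-user/SFR-Embedding-Mistral-GPTQ")  # placeholder repo name
```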
