---
license: mit
language:
- en
tags:
- sparse
- sparsity
- quantized
- onnx
- embeddings
- int8
---
This is the sparsified ONNX variant of the bge-base-en-v1.5 embedding model, created with the DeepSparse Optimum pipeline for ONNX export/inference and Neural Magic's Sparsify for one-shot INT8 quantization and 50% unstructured pruning.
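The two compression steps mentioned above can be illustrated in isolation. A minimal NumPy sketch (not Sparsify's actual implementation) of 50% unstructured magnitude pruning followed by symmetric per-tensor INT8 quantization of a weight matrix:

```python
import numpy as np

def prune_unstructured(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights until `sparsity` fraction are zero."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # Magnitude threshold below which weights are removed
    threshold = np.sort(np.abs(w).ravel())[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: w ~= scale * q, q in [-127, 127]."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)

pruned = prune_unstructured(w, sparsity=0.5)
q, scale = quantize_int8(pruned)

print("sparsity:", (pruned == 0).mean())
print("max reconstruction error:", np.abs(pruned - q.astype(np.float32) * scale).max())
```

Half of the weights become exact zeros (which sparsity-aware runtimes like DeepSparse can skip), and the remainder are stored as one byte each plus a single float scale.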
Current list of sparse and quantized bge ONNX models:

- `zeroshot/bge-large-en-v1.5-sparse`
- `zeroshot/bge-large-en-v1.5-quant`
- `zeroshot/bge-base-en-v1.5-sparse`
- `zeroshot/bge-base-en-v1.5-quant`
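Whichever variant produces the embeddings, downstream use is the same: vectors are typically L2-normalized and compared with cosine similarity. A minimal NumPy sketch (the vectors here are random placeholders, not real model outputs; 768 is bge-base's hidden size):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

rng = np.random.default_rng(0)
emb_query = rng.standard_normal(768)  # placeholder for a query embedding
emb_doc = rng.standard_normal(768)    # placeholder for a document embedding

score = cosine_similarity(emb_query, emb_doc)
print("similarity:", score)
```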