---
license: mit
---
This is the ONNX variant of the [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) embedding model, exported with the [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) integration.

To reproduce the ONNX export, first install the DeepSparse Optimum integration, then run the export script below:

```bash
pip install git+https://github.com/neuralmagic/optimum-deepsparse.git
```

```python
from optimum.deepsparse import DeepSparseModelForFeatureExtraction
from transformers.onnx.utils import get_preprocessor
from pathlib import Path

model_id = "BAAI/bge-base-en-v1.5"

# load model and convert to onnx
model = DeepSparseModelForFeatureExtraction.from_pretrained(model_id, export=True)
tokenizer = get_preprocessor(model_id)

# save onnx checkpoint and tokenizer
onnx_path = Path("bge-base-en-v1.5-dense")
model.save_pretrained(onnx_path)
tokenizer.save_pretrained(onnx_path)
```
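
Once exported, the checkpoint can be used to compute sentence embeddings. The snippet below is a minimal sketch, assuming the exported model follows the standard `optimum` feature-extraction interface (callable with tokenized inputs and returning `last_hidden_state`); CLS pooling with L2 normalization is the pooling recommended for the bge models:

```python
import torch
from optimum.deepsparse import DeepSparseModelForFeatureExtraction
from transformers import AutoTokenizer

# load the exported ONNX checkpoint and tokenizer from the local directory
onnx_path = "bge-base-en-v1.5-dense"  # directory created by the export script above
model = DeepSparseModelForFeatureExtraction.from_pretrained(onnx_path)
tokenizer = AutoTokenizer.from_pretrained(onnx_path)

sentences = ["DeepSparse runs transformer inference on CPUs."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs)

# CLS pooling + L2 normalization (recommended for bge embedding models)
last_hidden = torch.as_tensor(outputs.last_hidden_state)  # handles numpy or torch outputs
embeddings = torch.nn.functional.normalize(last_hidden[:, 0], p=2, dim=1)
print(embeddings.shape)  # (1, 768) for bge-base
```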

Current list of sparse and quantized bge ONNX models:

- [zeroshot/bge-large-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-large-en-v1.5-sparse)
- [zeroshot/bge-large-en-v1.5-quant](https://huggingface.co/zeroshot/bge-large-en-v1.5-quant)
- [zeroshot/bge-base-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-base-en-v1.5-sparse)
- [zeroshot/bge-base-en-v1.5-quant](https://huggingface.co/zeroshot/bge-base-en-v1.5-quant)
- [zeroshot/bge-small-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-small-en-v1.5-sparse)
- [zeroshot/bge-small-en-v1.5-quant](https://huggingface.co/zeroshot/bge-small-en-v1.5-quant)
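
Since these Hub repositories already contain the exported ONNX model, they can presumably be loaded directly with the same interface, without the `export=True` step; a hedged sketch:

```python
from optimum.deepsparse import DeepSparseModelForFeatureExtraction
from transformers import AutoTokenizer

# load an already-exported (quantized) ONNX checkpoint straight from the Hub;
# no export step should be needed since the repo ships the ONNX file
model_id = "zeroshot/bge-base-en-v1.5-quant"
model = DeepSparseModelForFeatureExtraction.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```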