Update README.md
README.md CHANGED
@@ -9,18 +9,18 @@ This is the quantized (INT8) ONNX variant of the [bge-large-en-v1.5](https://hug
 
 Model achieves 100% accuracy recovery on the STSB validation dataset vs. [dense ONNX variant](https://huggingface.co/zeroshot/bge-large-en-v1.5-dense).
 
-Current list of sparse and quantized bge ONNX models:
-
-[zeroshot/bge-large-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-large-en-v1.5-sparse)
-
-[zeroshot/bge-large-en-v1.5-quant](https://huggingface.co/zeroshot/bge-large-en-v1.5-quant)
-
-[zeroshot/bge-base-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-base-en-v1.5-sparse)
-
-[zeroshot/bge-base-en-v1.5-quant](https://huggingface.co/zeroshot/bge-base-en-v1.5-quant)
-
-[zeroshot/bge-small-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-small-en-v1.5-sparse)
-
-[zeroshot/bge-small-en-v1.5-quant](https://huggingface.co/zeroshot/bge-small-en-v1.5-quant)
-
-For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
+Current list of sparse and quantized bge ONNX models:
+
+| Links                                                                                         | Sparsification Method             |
+| --------------------------------------------------------------------------------------------- | --------------------------------- |
+| [zeroshot/bge-large-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-large-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
+| [zeroshot/bge-large-en-v1.5-quant](https://huggingface.co/zeroshot/bge-large-en-v1.5-quant)   | Quantization (INT8)               |
+| [zeroshot/bge-base-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-base-en-v1.5-sparse)   | Quantization (INT8) & 50% Pruning |
+| [zeroshot/bge-base-en-v1.5-quant](https://huggingface.co/zeroshot/bge-base-en-v1.5-quant)     | Quantization (INT8)               |
+| [zeroshot/bge-small-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-small-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
+| [zeroshot/bge-small-en-v1.5-quant](https://huggingface.co/zeroshot/bge-small-en-v1.5-quant)   | Quantization (INT8)               |
 
+For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
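As a usage illustration only, and not part of the README change above: a minimal sketch of computing sentence embeddings from the quantized checkpoint listed in the table. It assumes the `zeroshot/bge-large-en-v1.5-quant` repo ships an ONNX file that Hugging Face Optimum's ONNX Runtime backend can resolve, and that the usual BGE pooling (CLS token embedding, L2-normalized) applies; the example sentence is made up, and the model authors may intend a different runtime (e.g. DeepSparse).

```python
# Illustrative sketch, not the model card's official usage example.
# Assumes the repo's ONNX export is loadable via Optimum's ONNX Runtime backend.
import torch
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForFeatureExtraction

model_id = "zeroshot/bge-large-en-v1.5-quant"  # one of the repos in the table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForFeatureExtraction.from_pretrained(model_id)

sentences = ["Sparse and quantized embedding models run faster on CPUs."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# BGE models conventionally use the [CLS] token embedding, L2-normalized,
# as the sentence embedding.
embeddings = outputs.last_hidden_state[:, 0]
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # (1, 1024) for the large variant
```

The same pattern should carry over to the base and small variants in the table; only the repo ID and the embedding width differ.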