---
tags:
- sparse
- sparsity
- quantized
- onnx
- embeddings
- int8
license: mit
language:
- en
---

# gte-small-quant

This is the quantized (INT8) ONNX variant of the [gte-small](https://huggingface.co/thenlper/gte-small) embeddings model, created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export/inference and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot quantization.
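As a rough illustration, below is a minimal sketch of running the exported model with plain `onnxruntime` and a `transformers` tokenizer; DeepSparse ships its own CPU runtime and pipelines, which this sketch does not cover. The ONNX file name (`model.onnx`), the output ordering, and mean pooling as the pooling strategy are assumptions rather than details confirmed by this card.

```python
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

repo_id = "zeroshot/gte-small-quant"

# Tokenizer from this repo; the ONNX file name "model.onnx" is an assumption.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
onnx_path = hf_hub_download(repo_id=repo_id, filename="model.onnx")
session = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])

sentences = ["sparse and quantized embeddings", "efficient inference on CPUs"]
enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="np")

# Feed only the inputs the exported graph actually declares.
input_names = {i.name for i in session.get_inputs()}
feed = {k: v for k, v in enc.items() if k in input_names}

# Assumes the first output is the token-level last_hidden_state.
last_hidden_state = session.run(None, feed)[0]

# Mean pooling over non-padding tokens (assumed pooling for gte models),
# then L2 normalization so dot products give cosine similarity.
mask = enc["attention_mask"][..., None].astype(last_hidden_state.dtype)
embeddings = (last_hidden_state * mask).sum(axis=1) / mask.sum(axis=1)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

print(embeddings.shape)           # (2, 384) for gte-small
print(embeddings @ embeddings.T)  # pairwise cosine similarities
```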

Current list of sparse and quantized gte-small ONNX models:

| Links | Sparsification Method |
| ---------------------------------------------------------------------------------------- | --------------------------------- |
| [zeroshot/gte-small-sparse](https://huggingface.co/zeroshot/gte-small-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-small-quant](https://huggingface.co/zeroshot/gte-small-quant) | Quantization (INT8) |

BGE embedding models optimized with the same methods:

| Links | Sparsification Method |
| ---------------------------------------------------------------------------------------------- | --------------------------------- |
| [zeroshot/bge-large-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-large-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/bge-large-en-v1.5-quant](https://huggingface.co/zeroshot/bge-large-en-v1.5-quant) | Quantization (INT8) |
| [zeroshot/bge-base-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-base-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/bge-base-en-v1.5-quant](https://huggingface.co/zeroshot/bge-base-en-v1.5-quant) | Quantization (INT8) |
| [zeroshot/bge-small-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-small-en-v1.5-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/bge-small-en-v1.5-quant](https://huggingface.co/zeroshot/bge-small-en-v1.5-quant) | Quantization (INT8) |

For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).

![;)](https://media.giphy.com/media/bYg33GbNbNIVzSrr84/giphy-downsized-large.gif)