Update README.md
README.md CHANGED
@@ -32,6 +32,7 @@ It is designed for **fast CPU inference** with [ONNX Runtime](https://onnxruntim
 | **Supported HW** | CPU (optimized for Intel AVX512-VNNI, fallback to AVX2) |
 | **License** | Apache-2.0 |
 
+---
 
 ## 🚀 Features
 
@@ -40,6 +41,8 @@ It is designed for **fast CPU inference** with [ONNX Runtime](https://onnxruntim
 - 🔁 **Drop-in replacement** – embeddings compatible with the FP32 version.
 - 🌍 **Multilingual** – supports Russian 🇷🇺 and English 🇬🇧.
 
+---
+
 ## 🧠 Intended Use
 
 **✅ Recommended for:**
@@ -50,7 +53,9 @@ It is designed for **fast CPU inference** with [ONNX Runtime](https://onnxruntim
 
 **❌ Not ideal for:**
 - Absolute maximum accuracy scenarios (INT8 introduces minor loss)
-- GPU-optimized pipelines (prefer FP16/FP32 models instead)
+- GPU-optimized pipelines (prefer FP16/FP32 models instead)
+
+---
 
 ## ⚖️ Pros & Cons of Quantized ONNX
 
@@ -64,6 +69,8 @@ It is designed for **fast CPU inference** with [ONNX Runtime](https://onnxruntim
 - AVX512 optimizations only on modern Intel CPUs.
 - No GPU acceleration in this export.
 
+---
+
 ## 📊 Benchmark
 
 | Metric | Value |
@@ -75,6 +82,7 @@ It is designed for **fast CPU inference** with [ONNX Runtime](https://onnxruntim
 | Inference speed | ~2× faster |
 | Model size (MB) | 347.5 |
 
+---
 
 ## 📂 Files
 
@@ -84,6 +92,7 @@ tokenizer.json, vocab.txt, special_tokens_map.json – tokenizer
 
 config.json – model config
 
+---
 
 ## 🧩 Examples
 
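The diff ends on the `## 🧩 Examples` heading without showing that section's body. For context, a minimal usage sketch consistent with the README (CPU-only ONNX Runtime plus the bundled tokenizer files) could look like the following. The file name `model.onnx`, the BERT-style input signature, and the assumption that the first output is the last hidden state are illustrative guesses, not facts from this diff.

```python
# Minimal sketch: INT8 ONNX sentence embeddings on CPU.
# Assumed names (not shown in this diff): "model.onnx", BERT-style
# inputs, and a (batch, seq, hidden) hidden-state first output.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

def embed(texts, model_path="model.onnx", model_dir="."):
    tokenizer = AutoTokenizer.from_pretrained(model_dir)  # reads tokenizer.json / vocab.txt
    session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="np")
    input_names = {i.name for i in session.get_inputs()}
    feed = {k: v for k, v in enc.items() if k in input_names}
    last_hidden = session.run(None, feed)[0]                   # (batch, seq, hidden)
    mask = enc["attention_mask"][..., None].astype(np.float32)
    emb = (last_hidden * mask).sum(axis=1) / mask.sum(axis=1)  # mean pooling over valid tokens
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)    # L2-normalize

print(embed(["Привет, мир!", "Hello, world!"]).shape)
```

Mean pooling plus L2 normalization is a common convention for sentence-embedding exports; if the actual model bakes pooling into the graph, the pooling step here would be unnecessary.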
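The "drop-in replacement" bullet implies the INT8 embeddings stay in the same vector space as the FP32 ones; a quick sanity check is the cosine similarity between the two exports' outputs for the same input. This sketch reuses the hypothetical `embed()` helper above; `model_fp32.onnx` is likewise an assumed file name.

```python
# Sanity check for the drop-in claim: cosine similarity between
# INT8 and FP32 embeddings of the same sentence should be close
# to 1.0, minus the "minor loss" the README warns about.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

e_int8 = embed(["Hello, world!"], model_path="model.onnx")[0]       # assumed name
e_fp32 = embed(["Hello, world!"], model_path="model_fp32.onnx")[0]  # assumed name
print(f"cos(int8, fp32) = {cosine(e_int8, e_fp32):.4f}")
```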
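Finally, the "~2× faster" benchmark row can be reproduced in spirit with a simple wall-clock loop over `session.run()`; absolute numbers will vary with the CPU (AVX512-VNNI versus the AVX2 fallback noted above), and `model.onnx` is again an assumed name.

```python
# Micro-benchmark sketch for the "~2x faster" row: time repeated
# session.run() calls on a fixed batch after a warm-up pass.
import time
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

enc = tokenizer(["Hello, world!"] * 32, padding=True, return_tensors="np")
feed = {k: v for k, v in enc.items() if k in {i.name for i in session.get_inputs()}}

session.run(None, feed)  # warm-up: the first call pays one-time initialization costs
runs = 20
start = time.perf_counter()
for _ in range(runs):
    session.run(None, feed)
elapsed = (time.perf_counter() - start) / runs
print(f"{elapsed * 1000:.1f} ms per batch of 32")
```

Running the same loop against the FP32 export gives the ratio the benchmark table reports.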