Fix q8 weights (use uint8 for q8; int8 produces poor results)
#18 opened by Xenova (HF staff)

onnx/model_quantized.onnx CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:6e038fd6fb27b41fbb62e6a7df9b60b57215db3958d14382221beaab78fbc1d4
+size 1714133130
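The PR title's claim (uint8 quantization preserves accuracy where int8 does not) can be sketched numerically. The snippet below is an illustrative assumption, not the actual quantization script or this model's weights: it compares asymmetric uint8 quantization against symmetric int8 quantization on a skewed (mostly positive) weight tensor, where a symmetric signed scheme wastes much of its range.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical skewed weight tensor: mostly positive values.
w = rng.normal(loc=0.5, scale=0.2, size=10_000).astype(np.float32)

# Asymmetric (affine) uint8: the full [0, 255] range maps onto [min, max].
lo, hi = float(w.min()), float(w.max())
scale_u = (hi - lo) / 255.0
zp_u = round(-lo / scale_u)  # zero-point shifts the range
q_u = np.clip(np.round(w / scale_u) + zp_u, 0, 255).astype(np.uint8)
w_u = (q_u.astype(np.float32) - zp_u) * scale_u  # dequantize

# Symmetric int8: zero-point fixed at 0, range [-127, 127];
# for a skewed tensor, roughly half the codes go unused.
scale_s = max(abs(lo), abs(hi)) / 127.0
q_s = np.clip(np.round(w / scale_s), -127, 127).astype(np.int8)
w_s = q_s.astype(np.float32) * scale_s  # dequantize

err_u = float(np.abs(w - w_u).mean())
err_s = float(np.abs(w - w_s).mean())
print(f"uint8 mean abs error: {err_u:.6f}")
print(f"int8  mean abs error: {err_s:.6f}")
```

On this synthetic tensor the asymmetric uint8 scheme yields a smaller reconstruction error, consistent with the direction of the fix; the actual quality gap in the model depends on its real weight distributions.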