justheuristic committed
Commit f0b8e84 (1 parent: 28cff49)

reference phi3 medium

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -23,6 +23,8 @@ Results:
  | microsoft/Phi-3-mini-4k-instruct| None | 0.5529 | 0.8325 | 0.6055 | 0.8020 | 0.7364 | 7.6 |
  | | 1x16 | 0.5051 | 0.7950 | 0.5532 | 0.7949 | 0.7301 | 1.4 |
 
+ You can also find Phi-3-medium models compressed with AQLM+PV: [2-bit](https://huggingface.co/ISTA-DASLab/Phi-3-medium-4k-instruct-AQLM-PV-2Bit-1x16-hf) and [1-bit](https://huggingface.co/ISTA-DASLab/Phi-3-medium-4k-instruct-AQLM-PV-1Bit-1x16-hf).
+
  The 1x16g16 (1-bit) models are on the way and will be released as soon as we update the inference library with their respective kernels.
 
  To learn more about inference, as well as how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
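
For context, the AQLM checkpoints referenced in this diff load through the standard `transformers` API. Below is a minimal sketch, assuming the `aqlm` inference library and a recent `transformers` are installed and a CUDA GPU is available, using the 2-bit Phi-3-medium checkpoint linked above; see the official AQLM repo for the supported setups.

```python
# Minimal sketch: loading an AQLM+PV-compressed checkpoint with transformers.
# Assumes `pip install aqlm[gpu] transformers torch` and a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Phi-3-medium-4k-instruct-AQLM-PV-2Bit-1x16-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place layers on the available GPU(s)
)

inputs = tokenizer("What is AQLM?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```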