---
license: mit
---

This is an INT4-quantized version of the `Phi-3.5-mini-instruct` model. It was created with the following Python packages:
```
onnx==1.16.1
onnxruntime-directml==1.20.0
onnxruntime-genai-directml==0.4.0
torch==2.5.1
transformers==4.45.2
```
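These pinned versions can be installed in one step with pip, for example:
```
pip install onnx==1.16.1 onnxruntime-directml==1.20.0 onnxruntime-genai-directml==0.4.0 torch==2.5.1 transformers==4.45.2
```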
This quantized model was created with the following command (note that `--extra_options` takes plain `key=value` pairs):
```
python -m onnxruntime_genai.models.builder -m microsoft/Phi-3.5-mini-instruct -e dml -p int4 --extra_options int4_block_size=128 -o ./Phi-3.5-mini-instruct_onnx_int4
```
`onnxruntime_genai.models.builder` quantizes the model with the `MatMul4BitsQuantizer` class from `onnxruntime/quantization/matmul_4bits_quantizer.py`, using the `"DEFAULT"` quantization method.
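For reference, roughly the same weight-only quantization can be invoked directly through `onnxruntime.quantization`. The sketch below is an illustration, not the builder's exact code path: the input path is hypothetical, and leaving `algo_config` unset is what selects the `"DEFAULT"` method.
```
import onnx
from onnxruntime.quantization.matmul_4bits_quantizer import MatMul4BitsQuantizer

# Load a full-precision ONNX export of the model (hypothetical path).
model = onnx.load("Phi-3.5-mini-instruct_fp32.onnx")

# Quantize MatMul weights to INT4 with the block size from the builder command.
# Leaving algo_config unset falls back to the "DEFAULT" weight-only method.
quantizer = MatMul4BitsQuantizer(model, block_size=128)
quantizer.process()

# The quantizer holds the modified graph; save with external data, since the
# weights exceed the 2 GB protobuf limit.
quantizer.model.save_model_to_file(
    "Phi-3.5-mini-instruct_int4.onnx", use_external_data_format=True
)
```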
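Once built, the model can be run with a minimal generation loop. This is a sketch assuming the `onnxruntime-genai` 0.4.x Python API pinned above (newer releases replace `params.input_ids` with `generator.append_tokens`); the prompt string follows the Phi-3.5 chat template.
```
import onnxruntime_genai as og

# Load the quantized model produced by the builder command above.
model = og.Model("./Phi-3.5-mini-instruct_onnx_int4")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

# Sample user prompt wrapped in the Phi-3.5 chat template.
prompt = "<|user|>\nWhat is INT4 quantization?<|end|>\n<|assistant|>\n"

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(prompt)

# Token-by-token decoding loop.
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```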