echarlaix (HF staff) committed
Commit 01b6db8
1 Parent(s): 7e9a756

update loading instructions

Files changed (1):
  1. README.md (+3 -3)
README.md CHANGED
@@ -36,9 +36,9 @@ tags:
 To load the quantized model, you can do as follows:
 
 ```python
-from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
+from optimum.intel import INCModelForSequenceClassification
 
-model = IncQuantizedModelForSequenceClassification.from_pretrained("Intel/albert-base-v2-MRPC-int8")
+model = INCModelForSequenceClassification.from_pretrained("Intel/albert-base-v2-MRPC-int8")
 ```
 
 #### Test result
@@ -46,4 +46,4 @@ model = IncQuantizedModelForSequenceClassification.from_pretrained("Intel/albert
 | |INT8|FP32|
 |---|:---:|:---:|
 | **Accuracy (eval-f1)** |0.9193|0.9263|
-| **Model size (MB)** |45.0|46.7|
+| **Model size (MB)** |45.0|46.7|
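
For reference, a minimal usage sketch of the updated loading API (not part of this commit): the tokenizer call, the example sentence pair, and the label lookup are illustrative assumptions, not taken from the README diff.

```python
# Usage sketch only; the example sentences and label lookup are illustrative assumptions.
import torch
from transformers import AutoTokenizer
from optimum.intel import INCModelForSequenceClassification

model_id = "Intel/albert-base-v2-MRPC-int8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = INCModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair (paraphrase) task, so both sentences are encoded together.
inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```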