echarlaix (HF staff) committed
Commit d5a43ec
Parent(s): b320f88

update loading instructions

Files changed (1): README.md (+3 -2)
README.md CHANGED
@@ -35,9 +35,10 @@ tags:
 To load the quantized model, you can do as follows:
 
 ```python
-from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
+from optimum.intel import INCModelForSequenceClassification
 
-model = IncQuantizedModelForSequenceClassification.from_pretrained("Intel/distilbert-base-uncased-MRPC-int8-static")
+model_id = "Intel/distilbert-base-uncased-MRPC-int8-static"
+model = INCModelForSequenceClassification.from_pretrained(model_id)
 ```
 
 #### Test result