echarlaix committed
Commit 0ccd8a2
1 Parent(s): 0078530

update loading instructions

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -35,9 +35,10 @@ tags:
 To load the quantized model, you can do as follows:
 
 ```python
-from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
+from optimum.intel import INCModelForSequenceClassification
 
-model = IncQuantizedModelForSequenceClassification.from_pretrained("Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic")
+model_id = "distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic-inc"
+model = INCModelForSequenceClassification.from_pretrained(model_id)
 ```
 
 ### ONNX
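
For context (not part of the commit), here is a minimal usage sketch that follows the updated loading instructions and runs a single sentiment prediction with the quantized model. The tokenizer checkpoint (assumed to be published under the same `model_id`) and the example sentence are assumptions for illustration only.

```python
# Minimal sketch: load the INC-quantized model as described in the updated
# README and classify one sentence. Assumes the tokenizer is hosted under
# the same model_id as the model, which is the usual layout for this repo.
from transformers import AutoTokenizer
from optimum.intel import INCModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic-inc"
model = INCModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tokenize an example sentence and run the quantized model.
inputs = tokenizer("This movie was surprisingly good!", return_tensors="pt")
outputs = model(**inputs)

# Map the highest logit to its label (POSITIVE / NEGATIVE for SST-2).
predicted = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```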