mobicham committed
Commit d647df1
1 Parent(s): 9db6f86

Update README.md

Files changed (1):
  1. README.md +4 -4
README.md CHANGED
@@ -13,8 +13,8 @@ This 2-bit model achieves a 0.716 zero-shot top-1 accuracy on Imagenet, outperfo
 ### Basic Usage
 To run the model, install the HQQ library from https://github.com/mobiusml/hqq and use it as follows:
 ``` Python
-from hqq.models.vit_timm import ViTHQQ
-model = ViTHQQ.from_quantized("mobiuslabsgmbh/CLIP-ViT-H-14-laion2B-2bit_g16_s128-HQQ")
+from hqq.engine.timm import HQQtimm
+model = HQQtimm.from_quantized("mobiuslabsgmbh/CLIP-ViT-H-14-laion2B-2bit_g16_s128-HQQ")
 ```
 
 ### Zero-Shot Classification
@@ -31,8 +31,8 @@ orig_model, _ , preprocess = open_clip.create_model_and_transforms('ViT-H-14', p
 tokenizer = open_clip.get_tokenizer('ViT-H-14')
 model_text = orig_model.encode_text
 
-from hqq.models.vit_timm import ViTHQQ
-model_visual = ViTHQQ.from_quantized("mobiuslabsgmbh/CLIP-ViT-H-14-laion2B-2bit_g16_s128-HQQ")
+from hqq.engine.timm import HQQtimm
+model = HQQtimm.from_quantized("mobiuslabsgmbh/CLIP-ViT-H-14-laion2B-2bit_g16_s128-HQQ")
 
 ###############################################################
 #Add your own templates here, we provide simple ones below.
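The README section being edited pairs the quantized visual encoder with open_clip's text encoder for zero-shot classification. As a rough sketch of what the scoring step behind that section does (a self-contained NumPy stand-in, not the HQQ or open_clip API; the function name and toy embeddings below are invented for illustration):

```python
import numpy as np

def zero_shot_classify(image_features, text_features):
    """Return the index of the best-matching text embedding per image.

    In CLIP-style zero-shot classification, image and text features are
    L2-normalized and matched by cosine similarity.
    """
    img = image_features / np.linalg.norm(image_features, axis=-1, keepdims=True)
    txt = text_features / np.linalg.norm(text_features, axis=-1, keepdims=True)
    logits = img @ txt.T  # cosine similarities, shape (n_images, n_classes)
    return logits.argmax(axis=-1)

# Toy embeddings standing in for the visual/text encoder outputs.
rng = np.random.default_rng(0)
text_features = rng.normal(size=(3, 8))                        # 3 class prompts
image_features = text_features[1] + 0.01 * rng.normal(size=8)  # near class 1

print(zero_shot_classify(image_features[None, :], text_features))
```

With the real model, the class-prompt embeddings would come from `model_text(tokenizer(...))` (typically averaged over several templates) and the image embeddings from the quantized visual encoder loaded via `from_quantized`.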