Update README.md
README.md
CHANGED
@@ -21,3 +21,22 @@ metrics:
 - pre-trained: 0.4759327471256256
 - fine-tuned: 0.9957262277603149
+
+## Usage
+
+Load the vision model:
+
+```python
+from transformers import CLIPVisionModel
+
+vision_model = CLIPVisionModel.from_pretrained('tanganke/clip-vit-base-patch32_mnist')
+```
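As a quick check (not part of this commit; a minimal sketch assuming the standard `transformers` preprocessing API, with `digit.png` as a hypothetical input image), the fine-tuned encoder can be run on a single image:

```python
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

# Preprocessing follows the base checkpoint; the fine-tuned encoder keeps the
# same input format (224x224 RGB with CLIP normalization).
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
vision_model = CLIPVisionModel.from_pretrained("tanganke/clip-vit-base-patch32_mnist")

image = Image.open("digit.png").convert("RGB")  # hypothetical MNIST digit image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = vision_model(**inputs)
features = outputs.pooler_output  # (1, 768) image embedding for ViT-B/32
```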
+
+Substitute the vision encoder of CLIP:
+
+```python
+from transformers import CLIPModel
+
+clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
+clip_model.vision_model.load_state_dict(vision_model.vision_model.state_dict())
+```
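For completeness (again not part of the commit, and the prompt template below is an assumption), the patched `clip_model` can then score a digit image against text labels, zero-shot style:

```python
import torch
from PIL import Image
from transformers import CLIPProcessor

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
# Prompt wording is an assumption; any phrasing of the ten digit classes works.
texts = [f"a photo of the digit {i}" for i in range(10)]

image = Image.open("digit.png").convert("RGB")  # hypothetical MNIST digit image
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = clip_model(**inputs)  # clip_model from the snippet above
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs.argmax(dim=-1).item())  # index of the predicted digit
```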