Update README.md
## Model Details

The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment; to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they're being deployed within.

This instance of the CLIP model is intended for loading in the:

* `timm` (https://github.com/rwightman/pytorch-image-models) and
* `OpenCLIP` (https://github.com/mlfoundations/open_clip) libraries.
Please see https://huggingface.co/openai/clip-vit-base-patch16 for use in Hugging Face Transformers.
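For reference, a minimal zero-shot classification sketch with Hugging Face Transformers and the `openai/clip-vit-base-patch16` checkpoint linked above (the blank test image is a stand-in for real input):

```python
# Zero-shot image classification sketch using Hugging Face Transformers.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")

image = Image.new("RGB", (224, 224), color="white")  # stand-in for a real image
labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (num_images, num_labels); softmax gives class probs.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```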
### Model Date

January 2021