Commit d765749
Parent(s): d364efb

Update README.md

README.md CHANGED
@@ -13,6 +13,8 @@ duplicated_from: openai/clip-vit-base-patch32
 
 Disclaimer: The model card is taken and modified from the official CLIP repository; it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md).
 
+I (mattmdjaga) added a handler to this model, which seemed to be needed when deploying via inference endpoints.
+
 ## Model Details
 
 The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
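The handler file itself is not shown in this commit. As context, Hugging Face Inference Endpoints look for a `handler.py` exposing an `EndpointHandler` class with `__init__(path)` and `__call__(data)`. The sketch below shows what such a handler for a CLIP checkpoint might look like for zero-shot image classification; the payload shape (`image_url`, `candidate_labels`) is an assumption for illustration, and the actual handler in this repository may differ.

```python
# Hypothetical sketch of a handler.py for Hugging Face Inference Endpoints.
# The payload keys ("image_url", "candidate_labels") are assumptions; the
# handler actually added in this commit may use a different interface.
from typing import Any, Dict, List

import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` is the local snapshot of the model repository on the endpoint.
        self.processor = CLIPProcessor.from_pretrained(path)
        self.model = CLIPModel.from_pretrained(path)
        self.model.eval()

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, float]]:
        inputs = data["inputs"]
        image = Image.open(
            requests.get(inputs["image_url"], stream=True).raw
        ).convert("RGB")
        labels: List[str] = inputs["candidate_labels"]

        batch = self.processor(
            text=labels, images=image, return_tensors="pt", padding=True
        )
        with torch.no_grad():
            # logits_per_image: similarity of the image to each candidate label
            logits = self.model(**batch).logits_per_image[0]
        probs = logits.softmax(dim=-1).tolist()
        return [{"label": l, "score": p} for l, p in zip(labels, probs)]
```

On an endpoint, this class is instantiated once at startup and then called per request, which is why the model is loaded in `__init__` rather than in `__call__`.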