nazneen committed
Commit ee02ce7
1 Parent(s): ce19dc9

model documentation

Files changed (1)
  1. README.md +243 -29

README.md CHANGED
@@ -1,3 +1,4 @@
 
 ---
 tags:
 - generated_from_keras_callback
@@ -10,41 +11,254 @@ model-index:
 results: []
 ---
 
- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->
-
- # clip-vit-large-patch14-336
-
- This model was trained from scratch on an unknown dataset.
- It achieves the following results on the evaluation set:
-
-
- ## Model description
-
 More information needed
-
- ## Intended uses & limitations
-
 More information needed
-
- ## Training and evaluation data
-
 More information needed
 
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - optimizer: None
- - training_precision: float32
-
- ### Training results
-
- ### Framework versions
-
- - Transformers 4.21.3
- - TensorFlow 2.8.2
- - Tokenizers 0.12.1
 
+
 ---
 tags:
 - generated_from_keras_callback
 results: []
 ---
 
+ Clip-vit-large-patch14-336
 
+ # Model Card for Clip-vit-large-patch14-336
+
+ <!-- Provide a quick summary of what the model is/does. [Optional] -->
+ CLIP is a model developed by OpenAI that learns to match images with natural-language descriptions, enabling zero-shot classification of images against arbitrary, user-supplied label sets; this checkpoint is the ViT-L/14 variant operating on 336x336-pixel inputs.
+
+
+ # Model Details
+
+ ## Model Description
+
+ The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
+
+ - **Developed by:** OpenAI
+ - **Shared by [Optional]:** Hugging Face
+ - **Model type:** Zero-Shot Image Classification
+ - **Language(s) (NLP):** en
+ - **License:** MIT
+ - **Related Models:** More information needed
+ - **Parent Model:** More information needed
+ - **Resources for more information:**
+   - [GitHub Repo](https://github.com/openai/CLIP)
+   - [Associated Paper](https://arxiv.org/abs/2103.00020)
+   - [Blog Post](https://openai.com/blog/clip/)
+
+ # Uses
+
+ ## Direct Use
+
+ The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
+
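+ As a concrete illustration of this kind of zero-shot use, here is a minimal sketch using the Hugging Face `transformers` CLIP classes. The example image URL and the two candidate captions are placeholders chosen for illustration; they are not part of the original card.
+
+ ```python
+ import requests
+ import torch
+ from PIL import Image
+ from transformers import CLIPModel, CLIPProcessor
+
+ model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336")
+ processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
+
+ # Placeholder image and candidate labels, for illustration only.
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+ labels = ["a photo of a cat", "a photo of a dog"]
+
+ inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ # logits_per_image holds image-text similarity scores; softmax turns them
+ # into probabilities over the supplied labels.
+ probs = outputs.logits_per_image.softmax(dim=1)
+ print(dict(zip(labels, probs[0].tolist())))
+ ```
+
+ The class descriptions passed as `labels` define the taxonomy at inference time, which is what makes the classification zero-shot.
+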
+ ## Downstream Use [Optional]
+
+ The primary intended users of these models are AI researchers.
+
+ We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
+
+ ## Out-of-Scope Use
+
+ **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
+
+ Certain use cases which would fall under the domain of surveillance and facial recognition are always out of scope regardless of the performance of the model. This is because the use of artificial intelligence for such tasks is currently premature, given the lack of testing norms and checks to ensure its fair use.
+
+ Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English-language use cases.
+
+ # Bias, Risks, and Limitations
+
+ CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine-grained classification and counting objects. CLIP also poses issues with regards to fairness and bias, which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP, and there is evidence suggesting that linear probes can underestimate model performance.
+
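+ To make the linear-probe idea concrete, the sketch below fits a logistic-regression probe on frozen CLIP image features, in the spirit of the evaluation protocol described in the paper. The tiny synthetic dataset is only a placeholder standing in for a real labeled benchmark.
+
+ ```python
+ import numpy as np
+ import torch
+ from PIL import Image
+ from sklearn.linear_model import LogisticRegression
+ from transformers import CLIPModel, CLIPProcessor
+
+ model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336")
+ processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
+
+ def encode(images):
+     # Frozen CLIP image features; the backbone is never fine-tuned.
+     inputs = processor(images=images, return_tensors="pt")
+     with torch.no_grad():
+         return model.get_image_features(**inputs).numpy()
+
+ # Synthetic stand-in for a real labeled dataset (placeholder only).
+ rng = np.random.default_rng(0)
+ def fake_split(n):
+     images = [Image.fromarray(rng.integers(0, 256, (224, 224, 3), dtype=np.uint8)) for _ in range(n)]
+     return images, [i % 2 for i in range(n)]
+
+ train_images, train_labels = fake_split(8)
+ test_images, test_labels = fake_split(4)
+
+ # The linear probe: a logistic-regression classifier on top of the frozen features.
+ probe = LogisticRegression(max_iter=1000)
+ probe.fit(encode(train_images), train_labels)
+ print("linear-probe accuracy:", probe.score(encode(test_images), test_labels))
+ ```
+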
+ We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details are captured in the Broader Impacts section of the paper.)
+
+ We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (we default to the race categories as they are constructed in the Fairface dataset) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification, with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks, not to demonstrate an endorsement of or enthusiasm for such tasks.
+
+ ## Recommendations
+
+ Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
+
+ # Training Details
+
+ ## Training Data
+
+ The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet, which tend to skew towards more developed nations and younger, male users.
+
+ ## Training Procedure
+
+ The following hyperparameters were used during training:
+ - **Optimizer:** None
+ - **Training precision:** float32
+
+ ### Preprocessing
+
 More information needed
+
+ ### Speeds, Sizes, Times
+
 More information needed
+
+ ### Framework versions
+
+ - Transformers 4.21.3
+ - TensorFlow 2.8.2
+ - Tokenizers 0.12.1
+
+ # Evaluation
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets, spanning tasks from OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
+
+ - Food101
+ - CIFAR10
+ - CIFAR100
+ - Birdsnap
+ - SUN397
+ - Stanford Cars
+ - FGVC Aircraft
+ - VOC2007
+ - DTD
+ - Oxford-IIIT Pet dataset
+ - Caltech101
+ - Flowers102
+ - MNIST
+ - SVHN
+ - IIIT5K
+ - Hateful Memes
+ - SST-2
+ - UCF101
+ - Kinetics700
+ - Country211
+ - CLEVR Counting
+ - KITTI Distance
+ - STL-10
+ - RareAct
+ - Flickr30k
+ - MSCOCO
+ - ImageNet
+ - ImageNet-A
+ - ImageNet-R
+ - ImageNet Sketch
+ - ObjectNet (ImageNet Overlap)
+ - Youtube-BB
+ - ImageNet-Vid
+
+
+ ### Factors
+
+ More information needed
+
+ ### Metrics
+
+ More information needed
+
+ ## Results
+
+ More information needed
+
+ # Model Examination
+
+ More information needed
+
+ # Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** More information needed
+ - **Hours used:** More information needed
+ - **Cloud Provider:** More information needed
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ More information needed
+
+ ## Compute Infrastructure
+
+ More information needed
+
+ ### Hardware
+
+ CUDA GPU machine
+
+ ### Software
+
+ Install PyTorch 1.7.1 (or later) and torchvision, as well as small additional dependencies, and then install the [CLIP repo](https://github.com/openai/CLIP) as a Python package.
+
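+ For reference, here is a minimal loading sketch assuming the `clip` package has been installed from the repo above; the `ViT-L/14@336px` checkpoint name follows that repo's naming convention, and the captions are placeholders.
+
+ ```python
+ import torch
+ import clip  # the package installed from the GitHub repo referenced above
+
+ # Run on a CUDA GPU when available (see "Hardware"), otherwise fall back to CPU.
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+
+ # Load the ViT-L/14 checkpoint at 336-pixel resolution plus its preprocessing transform.
+ model, preprocess = clip.load("ViT-L/14@336px", device=device)
+
+ # Encode a few placeholder captions with the text tower.
+ text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)
+ with torch.no_grad():
+     text_features = model.encode_text(text)
+ print(text_features.shape)
+ ```
+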
+ # Citation
+
+ **BibTeX:**
+
+ If you find this model useful for your research, please cite the following paper:
+ ```
+ @inproceedings{radford2021learning,
+   title={Learning Transferable Visual Models From Natural Language Supervision},
+   author={Radford, Alec and Kim, Jong Wook and Hallacy, Chris and Ramesh, Aditya and Goh, Gabriel and Agarwal, Sandhini and Sastry, Girish and Askell, Amanda and Mishkin, Pamela and Clark, Jack and Krueger, Gretchen and Sutskever, Ilya},
+   booktitle={International Conference on Machine Learning},
+   year={2021}
+ }
+ ```
+
+
+ # Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
 More information needed
 
+ # More Information [optional]
+
+ More information needed
+
+ # Model Card Authors [optional]
+
+ OpenAI
+
+ # Model Card Contact
+
+ Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9).
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ from transformers import AutoProcessor, AutoModel
+
+ # Load the paired image/text processor and the CLIP model weights from the Hub.
+ processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
+ model = AutoModel.from_pretrained("openai/clip-vit-large-patch14-336")
+ ```
+
+ </details>