nazneen committed
Commit
3d5ed23
1 Parent(s): 4bb648a

model documentation

Files changed (1)
  1. README.md +182 -0
README.md ADDED

---
tags:
- clip
---

# Model Card for stable-diffusion-safety-checker

# Model Details

## Model Description

More information needed

- **Developed by:** More information needed
- **Shared by [Optional]:** CompVis
- **Model type:** Image classification (CLIP-based safety checker)
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** [CLIP](https://huggingface.co/openai/clip-vit-large-patch14)
- **Resources for more information:**
  - [CLIP Paper](https://arxiv.org/abs/2103.00020)

# Uses

## Direct Use

This model can be used for the task of image safety classification: flagging images that may contain unsafe (NSFW) content, for example images generated by Stable Diffusion. A minimal usage sketch is included at the end of this section.

The CLIP model developers note in their [model card](https://huggingface.co/openai/clip-vit-large-patch14):

> The primary intended users of these models are AI researchers.

We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.

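The snippet below is a minimal sketch of direct use, not taken from the original card: it screens a single image with the checker. It assumes `transformers` and `diffusers` are installed, that the checker class is `StableDiffusionSafetyChecker` from `diffusers`, and it uses a placeholder image path (`image.png`).

```python
# Sketch: screen one image directly with the safety checker.
# "image.png" is a placeholder path for any RGB image you want to check.
import numpy as np
from PIL import Image
from transformers import AutoFeatureExtractor
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

repo_id = "CompVis/stable-diffusion-safety-checker"
feature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)
safety_checker = StableDiffusionSafetyChecker.from_pretrained(repo_id)

image = Image.open("image.png").convert("RGB")
np_image = np.array(image, dtype=np.float32) / 255.0  # HWC array in [0, 1]

# The feature extractor produces the CLIP-style pixel_values the checker expects;
# the checker returns the (possibly blacked-out) images and a per-image NSFW flag.
safety_input = feature_extractor([image], return_tensors="pt")
checked_images, has_nsfw = safety_checker(
    images=[np_image], clip_input=safety_input.pixel_values
)
print(has_nsfw)  # e.g. [False] when no flagged concept is detected
```
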
## Downstream Use [Optional]

More information needed.

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

The CLIP model developers note in their [model card](https://huggingface.co/openai/clip-vit-large-patch14):
> We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from Fairface into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed.

> We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

More information needed

## Training Procedure

### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

More information needed

### Factors

More information needed

### Metrics

More information needed

## Results

More information needed

# Model Examination

More information needed

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

The CLIP model developers note in their [model card](https://huggingface.co/openai/clip-vit-large-patch14):

> The base model uses a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.

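As an illustration of that contrastive objective (not part of the original card), the sketch below scores an image against a few candidate captions with the parent CLIP checkpoint; the image path and captions are placeholders.

```python
# Sketch: CLIP image-text similarity using the parent checkpoint.
# "image.png" and the candidate captions are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("image.png").convert("RGB")
captions = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the scaled cosine similarities between the image embedding
# and each text embedding; softmax turns them into relative scores.
scores = outputs.logits_per_image.softmax(dim=-1)[0]
print(dict(zip(captions, scores.tolist())))
```
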
## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

More information needed.

# Citation

**BibTeX:**

More information needed

**APA:**

More information needed

# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

CompVis in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoFeatureExtractor
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

# The preprocessor (a CLIP-style feature extractor) and the checker weights both
# live in this repository; the checker class itself ships with diffusers.
feature_extractor = AutoFeatureExtractor.from_pretrained("CompVis/stable-diffusion-safety-checker")

safety_checker = StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker")
```
</details>
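
As a usage note (an assumption about typical downstream wiring, not taken from this card): in `diffusers`, the Stable Diffusion pipeline attaches a safety checker of this type by default, so generated images are screened before they are returned.

```python
# Sketch: the Stable Diffusion pipeline loads a safety checker of this type by default.
# Downloading the pipeline may require accepting the Stable Diffusion license on the Hub.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
print(type(pipe.safety_checker).__name__)  # expected: StableDiffusionSafetyChecker
```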