justinpinkney committed on
Commit 1b72af7
1 Parent(s): 3d0f192

Update README.md

Files changed (1): README.md (+0 −9)
README.md CHANGED
@@ -108,13 +108,4 @@ Texts and images from communities and cultures that use other languages are like
108
  This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
109
  ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
110
 
111
- ### Safety Module
112
-
113
- The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
114
- This checker works by checking model outputs against known hard-coded NSFW concepts.
115
- The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
116
- Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPModel` *after generation* of the images.
117
- The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
118
-
119
-
120
  *This model card was written by: Justin Pinkney and is based on the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v1-4).*
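
The mechanism described in the removed Safety Module section — comparing a generated image's embedding against hidden concept embeddings, each with a hand-tuned weight — can be sketched in miniature. This is a hypothetical illustration, not the actual diffusers `StableDiffusionSafetyChecker` implementation: the concept vectors, thresholds, and function names below are made up, and the real checker's concepts are intentionally hidden.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_image(image_embedding, concept_embeddings, concept_thresholds):
    # Flag the image if its similarity to any concept embedding exceeds
    # that concept's hand-tuned threshold (analogous to the per-concept
    # weights described in the model card text).
    return any(
        cosine_similarity(image_embedding, concept) > threshold
        for concept, threshold in zip(concept_embeddings, concept_thresholds)
    )

# Toy concept embeddings: orthonormal basis vectors stand in for the
# real (hidden) CLIP-space concept vectors.
concepts = [np.eye(4)[i] for i in range(3)]
thresholds = [0.5, 0.5, 0.5]

aligned = np.eye(4)[0]    # matches concept 0, so it gets flagged
unrelated = np.eye(4)[3]  # orthogonal to every concept, so it passes
```

In the real pipeline this comparison runs after generation, on CLIP image embeddings, and a flagged image is replaced by a black image rather than returned.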