Stirling89 committed
Commit
60de584
1 Parent(s): 3857c45

Update README.md

Files changed (1):
  1. README.md +1 -9
README.md CHANGED
@@ -248,15 +248,7 @@ which consists of images that are primarily limited to English descriptions.
 Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
 This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
 ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
-
-### Safety Module
-
-The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
-This checker works by checking model outputs against known hard-coded NSFW concepts.
-The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
-Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
-The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
-
+
 
 ## Training
 
 
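The removed section describes how the Safety Checker is meant to be wired into a Diffusers pipeline. For reference, here is a minimal sketch of that wiring, not part of this commit, assuming a recent `diffusers` release and the public `CompVis` checkpoints; `StableDiffusionPipeline.from_pretrained` loads the checker by default, and it is only passed explicitly here to make the mechanism visible:

```python
# Minimal sketch (not from this commit) of using the Safety Checker that the
# removed README section describes. Assumes `diffusers`, `transformers`, and
# `torch` are installed and a CUDA GPU is available.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.pipelines.stable_diffusion.safety_checker import (
    StableDiffusionSafetyChecker,
)
from transformers import CLIPImageProcessor

# The checker and its CLIP image processor can be loaded standalone ...
safety_checker = StableDiffusionSafetyChecker.from_pretrained(
    "CompVis/stable-diffusion-safety-checker"
)
feature_extractor = CLIPImageProcessor.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="feature_extractor"
)

# ... and passed to the pipeline, which then screens every generated image
# against its hard-coded NSFW concept embeddings after generation.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    safety_checker=safety_checker,
    feature_extractor=feature_extractor,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe("a photograph of an astronaut riding a horse")
# Images the checker flags are replaced with black images; the per-image
# flags are returned alongside the outputs.
print(result.nsfw_content_detected)
```

Passing `safety_checker=None` disables this screening, which is why removing the section from the model card does not by itself change the pipeline's default behavior.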