pcuenq committed
Commit b2a2beb
Parent: c5e22b8

License update (#1)

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license: openrail++
+license: other
 tags:
 - stable-diffusion
 - text-to-image
@@ -7,6 +7,9 @@ tags:
 ---
 
 # Stable Diffusion v2 Model Card
+
+This model was generated by Hugging Face using [Apple’s repository](https://github.com/apple/ml-stable-diffusion) which has [ASCL](https://github.com/apple/ml-stable-diffusion/blob/main/LICENSE.md).
+
 This model card focuses on the model associated with the Stable Diffusion v2 model, available [here](https://github.com/Stability-AI/stablediffusion).
 
 The model is trained from scratch 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. Then it is further trained for 850k steps at resolution `512x512` on the same dataset on images with resolution `>= 512x512`.
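For context on the line added above, the referenced Apple repository converts Stable Diffusion checkpoints from the Hugging Face Hub into Core ML packages. Below is a minimal, hypothetical sketch of how such a conversion is typically invoked through the repository's `python_coreml_stable_diffusion.torch2coreml` entry point; the model version, output directory, and exact flags are assumptions and may not match the command actually used to produce this model.

```python
# Hypothetical sketch only: drives Apple's ml-stable-diffusion conversion
# script as a subprocess. Flag names follow the apple/ml-stable-diffusion
# README and are assumptions; they may differ between releases.
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m", "python_coreml_stable_diffusion.torch2coreml",
        "--model-version", "stabilityai/stable-diffusion-2-base",  # assumed source checkpoint
        "--convert-unet",
        "--convert-text-encoder",
        "--convert-vae-decoder",
        "--convert-safety-checker",
        "-o", "coreml-stable-diffusion-2-base",  # hypothetical output directory
    ],
    check=True,
)
```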