---
license: other
license_name: playground-v2dot5-community
license_link: https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic/blob/main/LICENSE.md
tags:
- text-to-image
- playground
inference:
parameters:
guidance_scale: 3.0
---
# Playground v2.5 – 1024px Aesthetic Model
This repository contains a model that generates highly aesthetic images at a resolution of 1024x1024, as well as in portrait and landscape aspect ratios. You can use the model with Hugging Face 🧨 Diffusers.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/636c0c4eaae2da3c76b8a9a3/HYUUGfU6SOCHsvyeISQ5Y.png)
**Playground v2.5** is a diffusion-based text-to-image generative model, and a successor to [Playground v2](https://huggingface.co/playgroundai/playground-v2-1024px-aesthetic).
Playground v2.5 is the state-of-the-art open-source model in aesthetic quality. Our user studies demonstrate that our model outperforms SDXL, Playground v2, PIXART-α, DALL-E 3, and Midjourney 5.2.
For details on the development and training of our model, please refer to our blog post <span style="color: red;">[link]</span> and technical report <span style="color: red;">[link]</span>.
### Model Description
- **Developed by:** [Playground](https://playground.com)
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [Playground v2.5 Community License](https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic/blob/main/LICENSE.md)
- **Summary:** This model generates images based on text prompts. It is a Latent Diffusion Model that uses two fixed, pre-trained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L). It follows the same architecture as [Stable Diffusion XL](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl).
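The SDXL-style dual-text-encoder setup is visible in the repository layout itself. Below is a minimal sketch (not from the official docs) of loading just the two encoders, assuming the repo follows the standard SDXL subfolders `text_encoder` (CLIP-ViT/L) and `text_encoder_2` (OpenCLIP-ViT/G):
```python
# Minimal sketch: load only the two fixed text encoders, assuming the standard
# SDXL subfolder layout. The two hidden sizes differ (CLIP-ViT/L vs. OpenCLIP-ViT/G).
from transformers import CLIPTextModel, CLIPTextModelWithProjection

repo = "playgroundai/playground-v2.5-1024px-aesthetic"

text_encoder = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")
text_encoder_2 = CLIPTextModelWithProjection.from_pretrained(repo, subfolder="text_encoder_2")

print(text_encoder.config.hidden_size, text_encoder_2.config.hidden_size)
```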
### Using the model with 🧨 Diffusers
Install diffusers >= 0.26.0 and some dependencies:
```
pip install "diffusers>=0.26.0" transformers accelerate safetensors
```
To run our model, you will need to use our custom pipeline from this gist: https://gist.github.com/aykamko/402e948a8fdbbc9613f9978802d90194
**Notes:**
- Only the Euler, Heun, and DPM++ 2M Karras schedulers have been tested.
- We recommend `guidance_scale=7.0` for the Euler and Heun schedulers, and `guidance_scale=5.0` for DPM++ 2M Karras.
Then, run the following snippet:
```python
# Copy/paste the custom pipeline code here from the gist (it defines PlaygroundV2dot5Pipeline):
# https://gist.github.com/aykamko/402e948a8fdbbc9613f9978802d90194
import torch

pipe = PlaygroundV2dot5Pipeline.from_pretrained(
    "playgroundai/playground-v2.5-1024px-aesthetic",
    torch_dtype=torch.float16,
    use_safetensors=True,
    add_watermarker=False,
    variant="fp16",
)
pipe.to("cuda")

# Optional: use the DPM++ 2M Karras scheduler for improved quality on small details
# (`common_config` is expected to come from the pasted gist code):
# from diffusers import DPMSolverMultistepScheduler
# pipe.scheduler = DPMSolverMultistepScheduler(**common_config, use_karras_sigmas=True)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt=prompt, guidance_scale=7.0).images[0]
```
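Since the model also supports portrait and landscape aspect ratios, here is a hedged follow-up sketch. It assumes the custom pipeline follows the SDXL call signature (explicit `width`/`height` arguments); the resolution below is illustrative, not an official preset:
```python
# Hypothetical aspect-ratio example, assuming SDXL-style width/height arguments.
portrait = pipe(
    prompt=prompt,
    width=832,            # illustrative portrait resolution, not an official preset
    height=1216,
    guidance_scale=7.0,   # recommended above for the Euler/Heun schedulers
).images[0]
portrait.save("astronaut_portrait.png")
```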
### Using the model with Automatic1111/ComfyUI
Support coming soon. We will update this model card with instructions when ready.
### User Studies
This model card only provides a brief summary of our user study results. For extensive details on how we perform user studies, please check out our technical report: <span style="color: red;">[link]</span>
We conducted studies to measure overall aesthetic quality, as well as the specific areas we aimed to improve with Playground v2.5, namely multi-aspect-ratio generation and human preference alignment.
In aesthetic quality, Playground v2.5 dramatically outperforms the current state-of-the-art open-source models SDXL and PIXART-α, as well as Playground v2. Because the performance differential between Playground v2.5 and SDXL was so large, we also tested our aesthetic quality against world-class closed-source models like DALL-E 3 and Midjourney 5.2, and found that Playground v2.5 outperforms them as well.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63855d851769b7c4b10e1f76/V7LFNzgoQJnL__ndU0CnE.png)
Similarly, for multi-aspect-ratio generation, we outperform SDXL by a large margin.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/636c0c4eaae2da3c76b8a9a3/xMB0r-CmR3N6dABFlcV71.png)
Next, we benchmark Playground v2.5 specifically on people-related images to test human preference alignment. We compared Playground v2.5 against two commonly used baseline models: SDXL and RealStock v2, a community fine-tune of SDXL trained on a realistic people dataset.
Playground v2.5 outperforms both baselines by a large margin.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/636c0c4eaae2da3c76b8a9a3/7c-8Stw52OsNtUjse8Slv.png)
Lastly, we report metrics using our MJHQ-30K benchmark, which we [open-sourced](https://huggingface.co/datasets/playgroundai/MJHQ-30K) with the v2 release. We report both the overall FID and per-category FID. All FID metrics are computed at a resolution of 1024x1024. Our results show that Playground v2.5 outperforms both Playground v2 and SDXL in overall FID and in all category FIDs, especially in the people and fashion categories. This is in line with the results of the user study, which indicates a correlation between human preference and the FID score on the MJHQ-30K benchmark.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/636c0c4eaae2da3c76b8a9a3/7tyYDPGUtokh-k18XDSte.png)
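For readers who want to reproduce a comparable number, here is a minimal sketch of a per-category FID computation at 1024x1024. It assumes `torchmetrics` and two local folders of reference and generated PNGs; this is not the exact evaluation code used for the report:
```python
# Hedged sketch: per-category FID at 1024x1024 with torchmetrics.
# Assumes local folders of reference (MJHQ-30K) and generated PNGs for one category.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torchmetrics.image.fid import FrechetInceptionDistance

def load_images(folder: str) -> torch.Tensor:
    # Stack images as uint8 tensors of shape (N, 3, H, W), as torchmetrics expects.
    paths = sorted(Path(folder).glob("*.png"))
    return torch.stack(
        [torch.from_numpy(np.array(Image.open(p).convert("RGB"))).permute(2, 0, 1) for p in paths]
    )

fid = FrechetInceptionDistance(feature=2048)
fid.update(load_images("mjhq30k/people"), real=True)     # reference images
fid.update(load_images("generated/people"), real=False)  # model samples
print("people FID:", fid.compute().item())
```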
### How to cite us
<span style="color: red;">TODO: Link to the technical report</span>
```
@misc{playground-v2.5,
  url={https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic},
  title={Playground v2.5: Three Insights for Achieving State of the Art in Text-to-Image Generation},
  author={Li, Daiqing and Kamko, Aleks and Sabet, Ali and Akhgari, Ehsan and Xu, Linmiao and Doshi, Suhail}
}
```