---
base_model: stabilityai/stable-diffusion-2-1-base
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
inference: true
---

# controlnet-manhattan23/output_train_colormap_coconut

These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning. You can find some example images below.

prompt: A beautiful woman taking a picture with her smart phone.,People underneath an arched bridge near the water.
![images_0](./images_0.png)

prompt: A young man bending next to a toilet.,A man is kneeling and holding on to a toilet.
![images_1](./images_1.png)

prompt: Two people are sitting on chairs talking on at a corner.,Two men sitting on the street in front of a building.
![images_2](./images_2.png)

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]