---
library_name: diffusers
base_model: stabilityai/stable-diffusion-2-1-base
license: apache-2.0
widget:
- src: osm_tile_18_42048_101323.jpeg
  prompt: Satellite image features a city neighborhood
tags:
- controlnet
- stable-diffusion
- satellite-imagery
- OSM
pipeline_tag: image-to-image
---

# Model Card for GeoSynth-OSM

This is a ControlNet-based model that synthesizes satellite images from OpenStreetMap (OSM) images. The base Stable Diffusion model is [stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) (v2-1_512-ema-pruned.ckpt).

* Use it with 🧨 [diffusers](#examples)
* Use it with the [ControlNet](https://github.com/lllyasviel/ControlNet/tree/main?tab=readme-ov-file) repository

### Model Sources

- **Repository:** [stable-diffusion](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
- **Paper:** [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543)

## Examples

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# OSM tile used as the ControlNet conditioning image
img = Image.open("osm_tile_18_42048_101323.jpeg")

# Load the GeoSynth-OSM ControlNet and attach it to the SD 2.1 base pipeline
controlnet = ControlNetModel.from_pretrained("MVRL/GeoSynth-OSM")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet
)
pipe = pipe.to("cuda:0")

# Generate a satellite image conditioned on the OSM tile
generator = torch.manual_seed(10345340)
image = pipe(
    "Satellite image features a city neighborhood",
    generator=generator,
    image=img,
).images[0]

image.save("generated_city.jpg")
```

## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## More Information

[More Information Needed]

## Model Card Authors

[More Information Needed]

## Model Card Contact

[More Information Needed]