---
license: apache-2.0
datasets:
- Oysiyl/google-android-toy
language:
- en
---

### Demo

You can try the demo [here](https://sdloraandroidtoy.streamlit.app/). The [frontend](https://github.com/dmitriy-kisil/sd_lora_android_toy_frontend) is hosted on [Streamlit Community Cloud](https://streamlit.io/cloud) and the [backend](https://github.com/dmitriy-kisil/sd_lora_android_toy_backend) on [Cerebrium](https://www.cerebrium.ai/).

### Model card

Fine-tuned from Stable Diffusion 1.5 using LoRA. Training details are in the W&B [run](https://wandb.ai/logart1995/text2image-fine-tune/runs/2o98mhc7?workspace=user-logart1995).

### Inference

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the Stable Diffusion 1.5 base model and apply the LoRA weights on top
pipe = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.load_lora_weights("Oysiyl/sd-lora-android-google-toy", weight_name="pytorch_lora_weights.safetensors")
pipe = pipe.to("cuda")

# Fix the random seed so results are reproducible
g = torch.Generator(device="cuda").manual_seed(42)
image = pipe("An android toy near Eiffel tower", num_inference_steps=50, num_images_per_prompt=1, guidance_scale=7.5, generator=g).images[0]
image.save("android_toy.png")
```

### Output

![example](./android_toy.png)
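
### Adjusting LoRA strength (optional)

If the toy style comes through too strongly or too weakly, the LoRA influence can be scaled at inference time. The snippet below is a minimal sketch that reuses the `pipe` and `g` objects from the inference example above; `cross_attention_kwargs={"scale": ...}` is the standard diffusers mechanism for blending between the base model (0.0) and the full LoRA effect (1.0), and the value 0.8 is just an example to tune.

```py
# Sketch: reuses `pipe` and `g` from the inference example above.
# scale=1.0 applies the LoRA fully; lower values blend back toward base SD 1.5.
image = pipe(
    "An android toy near Eiffel tower",
    num_inference_steps=50,
    guidance_scale=7.5,
    cross_attention_kwargs={"scale": 0.8},  # example value, adjust to taste
    generator=g,
).images[0]
image.save("android_toy_scaled.png")
```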