---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=False&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- space
- horror
- abstract
- discodiffusion
- cosmic
- style
- disco
- styles
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Cmchrr
widget:
- text: ' '
  output:
    url: 1739484.jpeg
- text: ' '
  output:
    url: 1739450.jpeg
- text: 'cosmic horror'
  output:
    url: 1736886.jpeg
- text: 'cosmic horror'
  output:
    url: 1736889.jpeg
- text: ' '
  output:
    url: 1739456.jpeg
- text: 'cosmic horror'
  output:
    url: 1736878.jpeg
- text: 'cosmic horror'
  output:
    url: 1736875.jpeg
- text: 'cosmic horror'
  output:
    url: 1736879.jpeg
- text: 'cosmic horror'
  output:
    url: 1736874.jpeg
- text: 'cosmic horror'
  output:
    url: 1736884.jpeg
---

# Doctor Diffusion's CosmicDisco LoRA

## Model description

Works with the SDXL 0.9 base, SDXL 1.0 base, and SDXL 1.0 refiner models.

Trained on a small selection of Cosmic Horror Disco Diffusion renders.

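Since the card notes refiner compatibility above, here is a minimal sketch of a two-stage base + refiner pass with the LoRA applied to the base pipeline. The model IDs, the `denoising_end`/`denoising_start` split, and the prompt are assumptions drawn from the standard diffusers SDXL workflow, not specifics of this model:

```py
# Minimal sketch (assumed workflow): apply the LoRA during the SDXL base pass,
# then hand the latents to the SDXL 1.0 refiner for the final denoising steps.
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import torch

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
base.load_lora_weights(
    "DoctorDiffusion/doctor-diffusion-s-cosmicdisco-lora",
    weight_name="DD-CosmicDisco.safetensors",
)

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    vae=base.vae,  # share the VAE to save memory; the LoRA does not touch it
    torch_dtype=torch.float16,
).to("cuda")

prompt = "Cmchrr, cosmic horror"
# Stop the base pass early and let the refiner finish the last ~20% of steps.
latents = base(prompt, denoising_end=0.8, output_type="latent").images
image = refiner(prompt, image=latents, denoising_start=0.8).images[0]
image.save("cosmicdisco.png")
```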
## Trigger words

You should use `Cmchrr` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/DoctorDiffusion/doctor-diffusion-s-cosmicdisco-lora/tree/main) them in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the SDXL base pipeline and attach the LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights(
    'DoctorDiffusion/doctor-diffusion-s-cosmicdisco-lora',
    weight_name='DD-CosmicDisco.safetensors'
)

# Include the trigger word `Cmchrr` in the prompt to activate the style.
image = pipeline('Cmchrr, cosmic horror').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
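As a quick, hedged sketch of the weighting and fusing options the linked documentation covers (parameter names assume a recent diffusers release, and the 0.8 scale is an arbitrary example value), the LoRA strength can be adjusted per call or fused into the model weights:

```py
# Continuing from the pipeline above.

# Option 1: scale the LoRA influence for a single generation.
image = pipeline(
    'Cmchrr, cosmic horror',
    cross_attention_kwargs={"scale": 0.8},
).images[0]

# Option 2: fuse the LoRA into the model weights for slightly faster inference.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('Cmchrr, cosmic horror').images[0]
pipeline.unfuse_lora()  # restore the original weights if needed
```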