---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- lineart
- vector
- simple
- style
- vector-art
- vector art
- complex
- vector illustration
- vector style
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: vector
widget:
- text: ' '
output:
url: >-
3823589.jpeg
- text: ' '
output:
url: >-
3823606.jpeg
- text: ' '
output:
url: >-
3822700.jpeg
- text: ' '
output:
url: >-
3822702.jpeg
- text: ' '
output:
url: >-
3823587.jpeg
- text: ' '
output:
url: >-
3822063.jpeg
- text: ' '
output:
url: >-
3823818.jpeg
- text: ' '
output:
url: >-
3823823.jpeg
- text: ' '
output:
url: >-
3823826.jpeg
- text: ' '
output:
url: >-
3823850.jpeg
---
# Doctor Diffusion's Controllable Vector Art XL LoRA
<Gallery />
## Model description
This LoRA was trained exclusively on modified and captioned CC0/Public Domain images, prepared by myself.

**Usage:** prompt with "**vector**" for v2, or "**vctr artstyle**" for v1.

You can control the level of detail, the type of vector art, and whether outlines appear with these prompts:

For **color results**, use:
- "**simple** details"
- "**complex** details"
- "**outlines**"
- "solid color background"

For **black and white line art**, use:
- "**black line art**"
- "white background"
## Trigger words
You should use `vector` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/DoctorDiffusion/doctor-diffusion-s-controllable-vector-art-xl-lora/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the SDXL base pipeline and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')

# Load the v2 LoRA weights for this model
pipeline.load_lora_weights('DoctorDiffusion/doctor-diffusion-s-controllable-vector-art-xl-lora', weight_name='DD-vector-v2.safetensors')

# The prompt should include the trigger word "vector"
image = pipeline('vector').images[0]
```
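As a minimal sketch building on the pipeline loaded above, the detail-control keywords from the model description can simply be appended to the prompt. The subject text below is an illustrative example, not a preset from the model:

```py
# Colored vector art with heavier detail and outlines (illustrative prompt)
color_image = pipeline('vector, a fox in a forest, complex details, outlines, solid color background').images[0]

# Black and white line art variant (illustrative prompt)
lineart_image = pipeline('vector, a fox in a forest, black line art, white background').images[0]
```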
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).