---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
datasets:
- hahminlew/kream-product-blip-captions
language:
- en
library_name: diffusers
---
    
# LoRA text2image fine-tuning - NouRed/sd-fashion-products
These are LoRA adaptation weights for stabilityai/stable-diffusion-2, fine-tuned on the hahminlew/kream-product-blip-captions dataset. Some example images are shown below.

![img_0](./image_0.jpg)
![img_1](./image_1.jpg)
![img_2](./image_2.jpg)
![img_3](./image_3.jpg)

## Usage
```python
import torch
from diffusers import DiffusionPipeline

# Load the base Stable Diffusion 2 pipeline
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", revision=None, variant=None, torch_dtype=torch.float32
)
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = pipeline.to(device)

# Load the LoRA attention processors
pipeline.unet.load_attn_procs("NouRed/sd-fashion-products")

# Run inference with a fixed seed for reproducibility
seed = 42
generator = torch.Generator(device=device).manual_seed(seed)

prompt = "outer, The North Face x Supreme White Label Nuptse Down Jacket Cream, a photography of a white puffer jacket with a red box logo on the front."
image = pipeline(prompt, num_inference_steps=30, generator=generator).images[0]

# Save the generated product image
image.save("red_box_jacket.png")
```
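
If you are on a recent `diffusers` release, the adapter can also be loaded through the pipeline-level `load_lora_weights` method instead of `unet.load_attn_procs`. The sketch below assumes a CUDA GPU and half-precision weights; adjust the dtype and device for your setup.

```python
import torch
from diffusers import DiffusionPipeline

# Assumes a CUDA device and fp16 weights; use torch.float32 on CPU.
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# Pipeline-level LoRA loader (available in recent diffusers versions)
pipeline.load_lora_weights("NouRed/sd-fashion-products")

prompt = "outer, The North Face x Supreme White Label Nuptse Down Jacket Cream, a photography of a white puffer jacket with a red box logo on the front."
image = pipeline(
    prompt,
    num_inference_steps=30,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
image.save("red_box_jacket_fp16.png")
```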