---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
- openvino
 
---

# OpenVINO Stable Diffusion 

## naclbit/trinart_stable_diffusion_v2

This repository contains the models from [naclbit/trinart_stable_diffusion_v2](https://huggingface.co/naclbit/trinart_stable_diffusion_v2) converted to
OpenVINO, for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum:
[optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored with FP16
precision, which reduces the size of the model by half.

Please check out the [source model repository](https://huggingface.co/naclbit/trinart_stable_diffusion_v2) for more information about the model and its license.

To install the requirements for this demo, run `pip install "optimum-intel[openvino,diffusers]"`. This installs all the necessary dependencies, 
including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide).
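
If you want to verify that the installation succeeded, a quick sanity check (a minimal sketch; the exact version string depends on your environment) is to import the pipeline class and print the installed OpenVINO runtime version:

```python
# Quick sanity check that optimum-intel and OpenVINO are installed correctly.
from optimum.intel.openvino import OVStableDiffusionPipeline  # noqa: F401
from openvino.runtime import get_version

print(get_version())  # prints the installed OpenVINO runtime version
```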

The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the 
model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image.

```python
from optimum.intel.openvino import OVStableDiffusionPipeline

stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/naclbit-trinart_stable_diffusion_v2-ov")
images = stable_diffusion("a random image").images
```

The following example code uses static shapes for even faster inference. Using larger image sizes will
require more memory and take longer to generate.

If you have an 11th generation or later Intel Core processor, you can run inference on the integrated GPU; if you have an Intel 
discrete GPU, you can use that instead. To do so, add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below. 
Loading the model will take some time the first time, but will be faster afterwards because the compiled model is cached. Note that on GPU, 
only static shapes are currently supported for Stable Diffusion.


```python
from optimum.intel.openvino import OVStableDiffusionPipeline

batch_size = 1
num_images_per_prompt = 1
height = 256
width = 256

# load the model and reshape to static shapes for faster inference
model_id = "helenai/naclbit-trinart_stable_diffusion_v2-ov"
stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False)
stable_diffusion.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt)
stable_diffusion.compile()

# generate image!
prompt = "a random image"
images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images
images[0].save("result.png")
```
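
As a minimal sketch of the GPU variant described above (assuming an OpenVINO-supported Intel integrated or discrete GPU is available on your system; the output filename is just an example), the only change is the `.to("GPU")` call before compiling:

```python
from optimum.intel.openvino import OVStableDiffusionPipeline

# Same static-shape setup as above, but compiled for an Intel GPU.
model_id = "helenai/naclbit-trinart_stable_diffusion_v2-ov"
stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False)
stable_diffusion.reshape(batch_size=1, height=256, width=256, num_images_per_prompt=1)
stable_diffusion.to("GPU")   # select the GPU device before compiling
stable_diffusion.compile()   # first compilation is slow; later runs reuse the cached model

images = stable_diffusion("a random image", height=256, width=256).images
images[0].save("result_gpu.png")
```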