katuni4ka committed · Commit bf9d4cb (verified) · 1 Parent(s): 66a1c59

Update README.md

Files changed (1): README.md (+39 −12)
README.md CHANGED
@@ -15,10 +15,10 @@ This is [stable-diffusion-v1-5/stable-diffusion-v1-5](https://huggingface.co/sta

 The provided OpenVINO™ IR model is compatible with:

- * OpenVINO version 2024.2.0 and higher
- * Optimum Intel 1.17.0 and higher
+ * OpenVINO version 2024.5.0 and higher
+ * Optimum Intel 1.21.0 and higher

- ## Running Model Inference
+ ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

 1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

@@ -28,23 +28,50 @@ pip install optimum[openvino]

 2. Run model inference:

-
 ```
- from optimum.intel.openvino import OVStableDiffusionPipeline
+ from optimum.intel.openvino import OVDiffusionPipeline

 model_id = "OpenVINO/stable-diffusion-v1-5-fp16-ov"
- pipeline = OVStableDiffusionPipeline.from_pretrained(model_id)
+ pipeline = OVDiffusionPipeline.from_pretrained(model_id)

 prompt = "sailing ship in storm by Rembrandt"
- images = pipeline(prompt).images
+ images = pipeline(prompt, num_inference_steps=4).images
 ```

- ## Usage examples
-
- * [OpenVINO notebooks](https://github.com/openvinotoolkit/openvino_notebooks):
-   - [Latent Consistency Model using Optimum-Intel OpenVINO](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/stable-diffusion-text-to-image/stable-diffusion-text-to-image.ipynb)
- * [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai):
-   - [C++ image generation pipeline](https://github.com/openvinotoolkit/openvino.genai/tree/master/image_generation/stable_diffusion_1_5/cpp)
+ ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
+
+ 1. Install packages required for using OpenVINO GenAI.
+ ```
+ pip install huggingface_hub
+ pip install -U --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly openvino openvino-tokenizers openvino-genai
+ ```
+
+ 2. Download model from HuggingFace Hub
+
+ ```
+ import huggingface_hub as hf_hub
+
+ model_id = "OpenVINO/stable-diffusion-v1-5-fp16-ov"
+ model_path = "stable-diffusion-v1-5-fp16-ov"
+
+ hf_hub.snapshot_download(model_id, local_dir=model_path)
+ ```
+
+ 3. Run model inference:
+
+ ```
+ import openvino_genai as ov_genai
+ from PIL import Image
+
+ device = "CPU"
+ pipe = ov_genai.Text2ImagePipeline(model_path, device)
+
+ prompt = "sailing ship in storm by Rembrandt"
+ image_tensor = pipe.generate(prompt, num_inference_steps=20)
+ image = Image.fromarray(image_tensor.data[0])
+ ```

 ## Limitations
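The added OpenVINO GenAI snippet turns the pipeline output into a PIL image with `Image.fromarray(image_tensor.data[0])`. A minimal sketch of just that conversion step, using a synthetic `uint8` array in place of a real `Text2ImagePipeline` result (the batch-of-one `(1, 512, 512, 3)` RGB layout is an assumption for illustration, not taken from the commit):

```python
import io

import numpy as np
from PIL import Image

# Stand-in for image_tensor.data: a batch of one 512x512 RGB frame of
# uint8 pixels. In the README snippet this array would come from
# pipe.generate(...); here it is fabricated so the step runs standalone.
fake_batch = np.random.randint(0, 256, size=(1, 512, 512, 3), dtype=np.uint8)

# Same conversion as in the snippet: take the first image in the batch.
image = Image.fromarray(fake_batch[0])

# Encode to PNG in memory instead of writing a file to disk.
buf = io.BytesIO()
image.save(buf, format="PNG")
```

`Image.fromarray` infers `RGB` mode from the trailing dimension of 3; calling `image.save("generated.png")` instead would write the result to disk.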