commit files to HF hub
README.md CHANGED

@@ -4,17 +4,7 @@ tags:
 - stable-diffusion
 - text-to-image
 - openvino
-
-extra_gated_prompt: |-
-  This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
-  The CreativeML OpenRAIL License specifies:
-
-  1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
-  2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
-  3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
-  Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
-
-extra_gated_heading: Please read the LICENSE to access this model
+
 ---

 # OpenVINO Stable Diffusion
@@ -23,16 +13,18 @@ extra_gated_heading: Please read the LICENSE to access this model

 This repository contains the models from [nitrosocke/Nitro-Diffusion](https://huggingface.co/nitrosocke/Nitro-Diffusion) converted to
 OpenVINO, for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum:
-[optimum-intel](https://github.com/huggingface/optimum-intel#openvino).
-
+[optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored with FP16
+precision, which reduces the size of the model by half.

-
+Please check out the [source model repository](https://huggingface.co/nitrosocke/Nitro-Diffusion) for more information about the model and its license.
+
+To install the requirements for this demo, do `pip install optimum[openvino, diffusers]`. This installs all the necessary dependencies,
 including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide).

 The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the
 model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image.

-```
+```python
 from optimum.intel.openvino import OVStableDiffusionPipeline

 stable_diffusion = OVStableDiffusionPipeline.from_pretrained("nitrosocke/Nitro-Diffusion")
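
The hunk's trailing context stops at the `from_pretrained` call, so the second line mentioned in the README (the one that actually generates an image) is not shown above. A minimal sketch of that full two-line usage with `OVStableDiffusionPipeline` might look like the following; the prompt text and output filename are illustrative placeholders, not part of the commit.

```python
# Minimal sketch of the two-line usage described in the README diff above.
# The model id mirrors the snippet in the diff; the prompt string and the
# output filename are illustrative assumptions only.
from optimum.intel.openvino import OVStableDiffusionPipeline

# Line 1: download the model from the Hugging Face hub (if needed) and load it.
stable_diffusion = OVStableDiffusionPipeline.from_pretrained("nitrosocke/Nitro-Diffusion")

# Line 2: generate an image from a text prompt; the pipeline returns a list of PIL images.
image = stable_diffusion("a portrait of a princess with golden hair").images[0]

image.save("result.png")  # save the generated image to disk
```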