---
license: mit
language:
- en
pipeline_tag: text-to-image
tags:
- openvino
- text-to-image
---
Model Description:
This repo contains OpenVINO model files for [SimianLuo's LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7).
Generation Results:
By converting the model to OpenVINO format and running it on two Intel(R) Xeon(R) Gold 5220R CPUs @ 2.20GHz (24C/48T each), we achieve the following results compared to the original PyTorch LCM.
The reported times include the first compile and reshape phases and should be taken with a grain of salt, because the benchmark was run on a dual-socket server, which can underperform in this type of workload.
Number of images per batch is set to 1:
|Run No.|PyTorch|OpenVINO|OpenVINO w/ reshape|
|-------|-------|--------|------------------|
|1 |15.5841|18.0010 |13.4928 |
|2 |12.4634|5.0208 |3.6855 |
|3 |12.1551|4.9462 |3.7228 |
Number of images per batch is set to 4:
|Run No.|PyTorch|OpenVINO|OpenVINO w/ reshape|
|-------|-------|--------|------------------|
|1 |31.3666|33.1488 |25.7044 |
|2 |33.4797|17.7456 |12.8295 |
|3 |28.6561|17.9216 |12.7198 |
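For reference, a minimal sketch of how such wall-clock timings can be collected (assuming `pipe` and `prompt` are set up as in the usage example below; `time.perf_counter` is from the standard library):
```py
import time

# Hypothetical timing harness: `pipe` and `prompt` as in the usage example below.
# The first run includes compilation, matching the tables above.
for run in range(1, 4):
    start = time.perf_counter()
    pipe(prompt=prompt, width=512, height=512, num_inference_steps=4, guidance_scale=8.0)
    print(f"Run {run}: {time.perf_counter() - start:.4f}s")
```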
To run the model yourself, you can leverage the 🧨 Diffusers / 🤗 Optimum libraries:
1. Install the libraries:
```
pip install diffusers transformers accelerate optimum
pip install --upgrade-strategy eager optimum[openvino]
```
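Optionally, verify that the OpenVINO packages were picked up (this assumes the installed distributions are named `optimum-intel` and `openvino`, which the `optimum[openvino]` extra pulls in):
```py
# Optional sanity check: print the installed OpenVINO-related package versions
from importlib.metadata import version

print("optimum-intel:", version("optimum-intel"))
print("openvino:", version("openvino"))
```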
2. Clone the inference code:
```
git clone https://huggingface.co/deinferno/LCM_Dreamshaper_v7-openvino
cd LCM_Dreamshaper_v7-openvino
```
3. Run the model:
```py
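# lcm_ov_pipeline.py and lcm_scheduler.py ship with the repo cloned in step 2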
from lcm_ov_pipeline import OVLatentConsistencyModelPipeline
from lcm_scheduler import LCMScheduler
model_id = "deinferno/LCM_Dreamshaper_v7-openvino"
scheduler = LCMScheduler.from_pretrained(model_id, subfolder="scheduler")
# Use compile=True if you don't plan to reshape and recompile the model after loading.
# Don't forget to disable the OpenVINO cache via ov_config={"CACHE_DIR": ""}: Optimum won't use it anyway,
# and it would otherwise stay as dead weight in RAM when the pipeline is loaded again.
pipe = OVLatentConsistencyModelPipeline.from_pretrained(model_id, scheduler=scheduler, compile=False, ov_config={"CACHE_DIR": ""})
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
width = 512
height = 512
num_images = 1
batch_size = 1
# Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps.
num_inference_steps = 4
# Reshape to the static shapes above and recompile for better inference speed
pipe.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images)
pipe.compile()
images = pipe(prompt=prompt, width=width, height=height, num_inference_steps=num_inference_steps, guidance_scale=8.0, output_type="pil").images
```
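The pipeline returns standard PIL images, so persisting the output only takes a call to `Image.save`, for example:
```py
# Save each generated image to disk
for i, image in enumerate(images):
    image.save(f"lcm_result_{i}.png")
```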