---
license: mit
language:
- en
tags:
- openvino
- text-to-image
pipeline_tag: text-to-image
---

## Model Description

This repo contains OpenVINO model files for SimianLuo's LCM_Dreamshaper_v7, quantized to int8. The 8-bit model is **1.4x** faster than the `float32` model.

## Generation Results

## Usage

You can try out the model using [Fast SD CPU](https://github.com/rupeshs/fastsdcpu).

To run the model yourself, you can leverage the 🧨 Diffusers library:

1. Install the dependencies:

   ```
   pip install optimum-intel openvino diffusers onnx
   ```

2. Run the model:

   ```py
   from optimum.intel import OVLatentConsistencyModelPipeline

   pipe = OVLatentConsistencyModelPipeline.from_pretrained(
       "rupeshs/LCM-dreamshaper-v7-openvino-int8",
       ov_config={"CACHE_DIR": ""},
   )

   prompt = "sailing ship in storm by Leonardo da Vinci"

   images = pipe(
       prompt=prompt,
       width=512,
       height=512,
       num_inference_steps=4,
       guidance_scale=8.0,
   ).images
   images[0].save("out_image.png")
   ```
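
If you want to verify the **1.4x** speedup claim on your own hardware, a quick timing run is enough. The sketch below is only an illustration: it reuses the pipeline call from the snippet above, does one warm-up pass so model compilation is not counted, and then averages a few timed runs with `time.perf_counter`. The measured speedup will vary with CPU, thread count, and OpenVINO version.

```py
import time

from optimum.intel import OVLatentConsistencyModelPipeline

pipe = OVLatentConsistencyModelPipeline.from_pretrained(
    "rupeshs/LCM-dreamshaper-v7-openvino-int8",
    ov_config={"CACHE_DIR": ""},
)

prompt = "sailing ship in storm by Leonardo da Vinci"

# Warm-up pass so one-time compilation cost is excluded from the timing.
pipe(prompt=prompt, width=512, height=512, num_inference_steps=4, guidance_scale=8.0)

# Time a few generations and report the average latency per image.
runs = 3
start = time.perf_counter()
for _ in range(runs):
    pipe(prompt=prompt, width=512, height=512, num_inference_steps=4, guidance_scale=8.0)
elapsed = time.perf_counter() - start
print(f"Average latency: {elapsed / runs:.2f} s per image")
```

Running the same script against the `float32` variant of the pipeline gives the baseline for comparison.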