---
license: mit
---

[`SimianLuo/LCM_Dreamshaper_v7`](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) compiled on an AWS Inf2 instance. ***INF2/TRN1 ONLY***

***How to use***

```python
from optimum.neuron import NeuronLatentConsistencyModelPipeline

# Load the pre-compiled pipeline directly from the Hub (runs on Inf2/Trn1 only)
pipe = NeuronLatentConsistencyModelPipeline.from_pretrained("Jingya/LCM_Dreamshaper_v7_neuronx")

num_images_per_prompt = 2
prompt = ["Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"] * num_images_per_prompt

# LCM only needs a handful of denoising steps
images = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=8.0).images
```
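
The pipeline returns standard `PIL` images, so they can be written to disk as usual (a minimal sketch; the filename pattern is just an example):

```python
# Save each generated image; the filenames here are arbitrary
for i, image in enumerate(images):
    image.save(f"lcm_dreamshaper_{i}.png")
```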

If you are using a more recent Neuron compiler version, you can compile the checkpoint yourself with the following lines via [`🤗 optimum-neuron`](https://huggingface.co/docs/optimum-neuron/index) (the compilation takes approximately 40 minutes):
```python
from optimum.neuron import NeuronLatentConsistencyModelPipeline

model_id = "SimianLuo/LCM_Dreamshaper_v7"
num_images_per_prompt = 1

# Static input shapes and compiler arguments used for the Neuron export
input_shapes = {"batch_size": 1, "height": 768, "width": 768, "num_images_per_prompt": num_images_per_prompt}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

# Export (compile) the original checkpoint for Neuron
stable_diffusion = NeuronLatentConsistencyModelPipeline.from_pretrained(
    model_id, export=True, **compiler_args, **input_shapes
)

# Save the compiled artifacts locally
save_directory = "lcm_sd_neuron/"
stable_diffusion.save_pretrained(save_directory)

# Push to hub
stable_diffusion.push_to_hub(save_directory, repository_id="Jingya/LCM_Dreamshaper_v7_neuronx", use_auth_token=True)
```
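
Once the export has finished, the compiled artifacts can be reloaded without recompiling, e.g. from the local `lcm_sd_neuron/` save directory (a minimal sketch):

```python
from optimum.neuron import NeuronLatentConsistencyModelPipeline

# Reload the already-compiled pipeline; no `export=True` is needed this time
pipe = NeuronLatentConsistencyModelPipeline.from_pretrained("lcm_sd_neuron/")
images = pipe(prompt=["a cozy cabin in a snowy forest, 8k"], num_inference_steps=4, guidance_scale=8.0).images
```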

Feel free to open a pull request and contribute to this repo. Thanks 🤗!
|