---
license: mit
---

[`SimianLuo/LCM_Dreamshaper_v7`](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) compiled for AWS Inf2 instances. ***INF2/TRN1 ONLY***

***How to use***

```python
from optimum.neuron import NeuronLatentConsistencyModelPipeline

pipe = NeuronLatentConsistencyModelPipeline.from_pretrained("Jingya/LCM_Dreamshaper_v7_neuronx")

num_images_per_prompt = 2
prompt = ["Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"] * num_images_per_prompt

images = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=8.0).images
```

If you are using a later Neuron compiler version, you can recompile the checkpoint yourself via [`🤗 optimum-neuron`](https://huggingface.co/docs/optimum-neuron/index) (the compilation takes approximately 40 minutes):

```python
from optimum.neuron import NeuronLatentConsistencyModelPipeline

model_id = "SimianLuo/LCM_Dreamshaper_v7"
num_images_per_prompt = 1

# Static input shapes and compiler options used for the neuron export
input_shapes = {"batch_size": 1, "height": 768, "width": 768, "num_images_per_prompt": num_images_per_prompt}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

# `export=True` triggers compilation of the checkpoint for Inf2/Trn1
stable_diffusion = NeuronLatentConsistencyModelPipeline.from_pretrained(
    model_id, export=True, **compiler_args, **input_shapes
)
save_directory = "lcm_sd_neuron/"
stable_diffusion.save_pretrained(save_directory)

# Push the compiled artifacts to the hub
stable_diffusion.push_to_hub(save_directory, repository_id="Jingya/LCM_Dreamshaper_v7_neuronx", use_auth_token=True)
```

Feel free to open a pull request and contribute to this repo, thanks 🤗!