animemory committed
Commit e4bf77c
Parent: 8ad1d64

readme update

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -105,7 +105,7 @@ Go to [ComfyUI-Animemory-Loader](https://github.com/animEEEmpire/ComfyUI-Animemo
 
 3.Diffusers inference.
 
-The pipeline has not been merged yet. Please use the following code to setup the environment.
+- The pipeline has not been merged yet. Please use the following code to setup the environment.
 ```shell
 git clone https://github.com/huggingface/diffusers.git
 cd ..
@@ -115,7 +115,7 @@ cp diffusers_animemory/* diffusers -r
 cd diffusers
 pip install .
 ```
-And then, you can use the following code to generate images.
+- And then, you can use the following code to generate images.
 
 ```python
 from diffusers import AniMemoryPipeLine
@@ -137,7 +137,7 @@ images = pipe(prompt=prompt,
 images.save("output.png")
 ```
 
-Use `pipe.enable_sequential_cpu_offload()` to offload the model into CPU for less GPU memory cost (about 14.25 G,
+- Use `pipe.enable_sequential_cpu_offload()` to offload the model into CPU for less GPU memory cost (about 14.25 G,
 compared to 25.67 G if CPU offload is not enabled), but the inference time will increase significantly(5.18s v.s.
 17.74s on A100 40G).
 
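The memory/latency trade-off quoted in the README text above can be sanity-checked with quick arithmetic (the 14.25 G / 25.67 G and 5.18 s / 17.74 s figures are taken from the commit; the variable names below are illustrative):

```python
# Trade-off reported for pipe.enable_sequential_cpu_offload(), per the README.
baseline_mem_gb = 25.67   # GPU memory without CPU offload
offload_mem_gb = 14.25    # GPU memory with sequential CPU offload
baseline_time_s = 5.18    # inference time on A100 40G, no offload
offload_time_s = 17.74    # inference time on A100 40G, with offload

mem_saving = 1 - offload_mem_gb / baseline_mem_gb
slowdown = offload_time_s / baseline_time_s
print(f"memory saved: {mem_saving:.0%}, slowdown: {slowdown:.1f}x")
# → memory saved: 44%, slowdown: 3.4x
```

So sequential CPU offload roughly halves peak GPU memory at the cost of about 3.4x slower inference; it is worth enabling only when the full model does not fit on the GPU.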