Apply for community grant: Academic project (gpu)

Opened by aningineer

Paper link: https://arxiv.org/abs/2402.13573

Abstract: Attention has been crucial for image diffusion models; however, its quadratic computational complexity limits the sizes of images we can process within reasonable time and memory constraints. This paper investigates the importance of dense attention in generative image models, which often contain redundant features, making them suitable for sparser attention mechanisms. We propose ToDo, a novel training-free method that relies on token downsampling of key and value tokens to accelerate Stable Diffusion inference by up to 2x for common sizes and up to 4.5x or more for high resolutions like 2048x2048. We demonstrate that our approach outperforms previous methods in balancing efficient throughput and fidelity.
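In rough terms, the key/value downsampling idea can be sketched like this (an illustration of the concept only, not the paper's implementation; the nearest-neighbor downsampling, shapes, and function name are assumptions):

import torch
import torch.nn.functional as F

def attention_with_downsampled_kv(q, k, v, h, w, factor=2):
    # q, k, v: (batch, seq_len, dim), where seq_len == h * w image tokens
    b, n, d = k.shape
    # Map key/value tokens back onto the 2-D latent grid and downsample,
    # so attention runs against factor^2 fewer key/value tokens.
    k = F.interpolate(k.transpose(1, 2).reshape(b, d, h, w),
                      scale_factor=1 / factor, mode="nearest")
    v = F.interpolate(v.transpose(1, 2).reshape(b, d, h, w),
                      scale_factor=1 / factor, mode="nearest")
    k = k.flatten(2).transpose(1, 2)  # (b, n / factor^2, d)
    v = v.flatten(2).transpose(1, 2)
    # Queries keep full resolution, so the output stays (b, n, d).
    return F.scaled_dot_product_attention(q, k, v)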


Hi @aningineer, we've assigned ZeroGPU to this Space. Please check the usage section of this page so your Space can run on ZeroGPU.

Hi @hysts, thanks for enabling ZeroGPU support for our Space! I followed the guide at the link above, but we're seeing this error:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 12, in <module>
    pipe.enable_xformers_memory_efficient_attention()
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 2035, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 2061, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 2051, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 261, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 257, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 257, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 257, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 254, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 261, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 257, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 257, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 254, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 273, in set_use_memory_efficient_attention_xformers
    raise e
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 268, in set_use_memory_efficient_attention_xformers
    torch.randn((1, 2, 40), device="cuda"),
  File "/home/user/.local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 302, in _lazy_init
    torch._C._cuda_init()
  File "/home/user/.local/lib/python3.10/site-packages/spaces/zero/torch.py", line 90, in _cuda_init_raise
    raise RuntimeError(
RuntimeError: CUDA must not be initialized in the main process on Spaces with Stateless GPU environment.
You can look at this Stacktrace to find out which part of your code triggered a CUDA init

Any suggestions on how we can solve this?

Cheers,
Aninda

@aningineer Thanks for checking!

Can you try removing this line? It looks like it requires CUDA, and since it's also called in the function here, I think we can remove it.

Also, ZeroGPU is only compatible with gradio 4.x, so can you update this line to use the latest version, 4.19.2?
https://huggingface.co/spaces/aningineer/ToDo/blob/6b566c6649117db13c6b88f54dd00b9de5d889bc/README.md?code=true#L5
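For reference, the usual ZeroGPU pattern is to keep the main process CUDA-free and only touch the GPU inside a function decorated with @spaces.GPU. A minimal sketch (the model ID and function body here are just placeholders):

import gradio as gr
import spaces
import torch
from diffusers import StableDiffusionPipeline

# Loading the pipeline in the main process is fine, as long as nothing
# initializes CUDA here (no .to("cuda"), no xformers setup).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

@spaces.GPU  # a GPU is attached only while this function runs
def generate(prompt: str):
    pipe.to("cuda")
    return pipe(prompt).images[0]

demo = gr.Interface(generate, gr.Textbox(label="Prompt"), gr.Image())
demo.launch()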

Amazing, thanks for your help @hysts!

We have it up and running now - looking forward to hearing feedback from the community! 🤗

@aningineer Awesome!

BTW, you can also use gr.Markdown to show your title instead of gr.Label.
For example,

import gradio as gr

# CSS to center the h1 produced by the Markdown heading below
css = """
h1 {
  text-align: center;
  display: block;
}
"""

with gr.Blocks(css=css) as demo:
    gr.Markdown("# Your Title")  # rendered as an h1, centered by the CSS
    gr.Button()

if __name__ == "__main__":
    demo.launch()

will be rendered with the title centered at the top of the page.

Also, FYI, you might want to add your arXiv link to README.md so your Space appears on the arXiv page: https://huggingface.co/docs/hub/spaces-add-to-arxiv#how-to-add-a-space-to-arxiv
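For example, the top of README.md could look something like this (just a sketch, and not the complete metadata header; the field values are placeholders). The arXiv URL only needs to appear somewhere in the README:

---
title: ToDo
sdk: gradio
sdk_version: 4.19.2
app_file: app.py
---

Demo for ToDo: Token Downsampling (https://arxiv.org/abs/2402.13573)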

Fantastic, thanks for the tips, I've incorporated those changes now 🔥
