Apply for community grant: Academic project (gpu)

#1
by nowsyn (Owner) - opened, edited Jul 5

Hi, AnyControl is a multi-control image synthesis framework. Please refer to https://github.com/open-mmlab/AnyControl for detailed information. I want to deploy AnyControl on ZeroGPU, but it fails: there are some custom CUDA operations that need to be compiled manually. Could you please assign an A10 for AnyControl?

Hi @nowsyn, we can assign a normal GPU as well, but let me look into the issue first.

The error log shows:

=== Application stopped (exit code: 1) at 2024-07-05 03:48:25.914832786 UTC ===
Traceback (most recent call last):
  File "/home/user/app/app.py", line 16, in <module>
    from annotator.midas import MidasDetector
  File "/home/user/app/annotator/midas/__init__.py", line 6, in <module>
    from .api import MiDaSInference
  File "/home/user/app/annotator/midas/api.py", line 9, in <module>
    from .midas.dpt_depth import DPTDepthModel
  File "/home/user/app/annotator/midas/midas/dpt_depth.py", line 6, in <module>
    from .blocks import (
  File "/home/user/app/annotator/midas/midas/blocks.py", line 4, in <module>
    from .vit import (
  File "/home/user/app/annotator/midas/midas/vit.py", line 3, in <module>
    import timm
  File "/usr/local/lib/python3.10/site-packages/timm/__init__.py", line 2, in <module>
    from .models import create_model, list_models, is_model, list_modules, model_entrypoint, \
  File "/usr/local/lib/python3.10/site-packages/timm/models/__init__.py", line 1, in <module>
    from .byoanet import *
  File "/usr/local/lib/python3.10/site-packages/timm/models/byoanet.py", line 16, in <module>
    from .byobnet import ByoBlockCfg, ByoModelCfg, ByobNet, interleave_blocks
  File "/usr/local/lib/python3.10/site-packages/timm/models/byobnet.py", line 36, in <module>
    from .helpers import build_model_with_cfg
  File "/usr/local/lib/python3.10/site-packages/timm/models/helpers.py", line 18, in <module>
    from .layers import Conv2dSame, Linear
  File "/usr/local/lib/python3.10/site-packages/timm/models/layers/__init__.py", line 10, in <module>
    from .conv_bn_act import ConvBnAct
  File "/usr/local/lib/python3.10/site-packages/timm/models/layers/conv_bn_act.py", line 8, in <module>
    from .create_norm_act import convert_norm_act
  File "/usr/local/lib/python3.10/site-packages/timm/models/layers/create_norm_act.py", line 16, in <module>
    from .norm_act import BatchNormAct2d, GroupNormAct
  File "/usr/local/lib/python3.10/site-packages/timm/models/layers/norm_act.py", line 7, in <module>
    from .create_act import get_act_layer
  File "/usr/local/lib/python3.10/site-packages/timm/models/layers/create_act.py", line 8, in <module>
    from .activations_me import *
  File "/usr/local/lib/python3.10/site-packages/timm/models/layers/activations_me.py", line 105, in <module>
    def hard_sigmoid_jit_bwd(x, grad_output):
  File "/usr/local/lib/python3.10/site-packages/torch/jit/_script.py", line 1341, in script
    fn = torch._C._jit_script_compile(
  File "/usr/local/lib/python3.10/site-packages/torch/jit/annotations.py", line 71, in get_signature
    signature = try_real_annotations(fn, loc)
  File "/usr/local/lib/python3.10/site-packages/torch/jit/annotations.py", line 278, in try_real_annotations
    arg_types = [ann_to_type(p.annotation, loc)
  File "/usr/local/lib/python3.10/site-packages/torch/jit/annotations.py", line 278, in <listcomp>
    arg_types = [ann_to_type(p.annotation, loc)
  File "/usr/local/lib/python3.10/site-packages/torch/jit/annotations.py", line 422, in ann_to_type
    raise ValueError(f"Unknown type annotation: '{ann}' at {loc.highlight()}")
ValueError: Unknown type annotation: 'Any' at   File "/usr/local/lib/python3.10/site-packages/timm/models/layers/activations_me.py", line 106


@torch.jit.script
def hard_sigmoid_jit_bwd(x, grad_output):
    m = torch.ones_like(x) * ((x >= -3.) & (x <= 3.)) / 6.
        ~~~~~~~~~~~~~~~ <--- HERE
    return grad_output * m

=== Application stopped (exit code: 1) at 2024-07-05 03:57:51.780535533 UTC ===

But you can avoid this error either by upgrading to the latest timm, or by doing

torch.jit.script = lambda f: f

before importing timm.

Does this fix your problem? Or are there more issues?

Owner

Hi, thanks. I fixed this error by adding torch.jit.script = lambda f: f before importing timm.

However, I still do not know how to compile CUDA operations manually under ZeroGPU, as there is no CUDA_HOME environment variable. Please see this error log.
[Screenshot: CUDA compilation error log, 2024-07-05]

@nowsyn Thanks for checking. Would it be possible to build a wheel for the package in your local environment with CUDA and add it to your Space?
In the ZeroGPU environment, CUDA is not available at startup, but it's often possible to pre-build a wheel and install it at startup; for example, see https://huggingface.co/spaces/dylanebert/LGM-mini/blob/23d923b0f055aea3a131a3ad7d99611c21a9bb54/app.py#L11-L15.
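As a concrete sketch of that pattern, the start of app.py could install a wheel committed to the Space repo. The wheel path and package name below are hypothetical; the real file would come from building the extension beforehand in a local environment where CUDA_HOME is set (e.g. with `pip wheel . --no-deps`):

```python
import os
import subprocess
import sys

def install_local_wheel(path: str) -> bool:
    """Install a pre-built wheel from the repo, if present.

    Returns True if an install was attempted. The wheel is assumed to
    have been compiled beforehand in an environment with the CUDA
    toolkit available, then committed to the Space, since ZeroGPU has
    no CUDA at startup.
    """
    if not os.path.exists(path):
        return False
    subprocess.run([sys.executable, "-m", "pip", "install", path], check=True)
    return True

# Hypothetical filename for a custom CUDA-ops wheel built locally:
install_local_wheel("wheels/custom_ops-0.1.0-cp310-cp310-linux_x86_64.whl")
```

The install runs once per container start, before any module that needs the compiled ops is imported.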

Owner

Thanks a lot! I built a wheel and it works!
