Apply for community grant: Academic project (gpu)

#1
by unsubscribe - opened
InternLM org

Deploy an internlm2-chat-20b-turbomind-4bits model.

Hi @unsubscribe, we've assigned ZeroGPU to this Space. Please check the usage section of this page so that your Space can run on ZeroGPU. The @spaces decorator does nothing on non-ZeroGPU Spaces or in local environments, so you can safely add it.
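For reference, the pattern looks roughly like this. This is a minimal sketch, not the full API: the real decorator comes from the HF `spaces` package, and the fallback below just mimics its documented no-op behavior outside ZeroGPU, so the same code runs locally. The function name `generate` is a hypothetical example.

```python
# Sketch of the ZeroGPU decorator pattern. Assumes the HF `spaces` package
# is available on the Space; locally we substitute a no-op, which matches
# the documented behavior ("does nothing on non-ZeroGPU Spaces").
try:
    import spaces
    gpu_decorator = spaces.GPU
except ImportError:
    def gpu_decorator(fn):
        return fn  # no-op outside ZeroGPU, so it's always safe to apply

@gpu_decorator
def generate(prompt: str) -> str:
    # The GPU-bound work (e.g. model inference) would go here.
    return f"response to: {prompt}"

print(generate("hi"))
```

Only the functions that actually touch the GPU should be decorated.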

InternLM org

@hysts Could you please assign an A10 GPU to this Space?

@unsubscribe The hardware of a ZeroGPU instance is actually an A10G, so it should be fine, but let us know if you run into any issues with ZeroGPU.

InternLM org

How can I use the spaces module to decorate my async functions?

InternLM org
edited Jan 22

@hysts Could you kindly provide some help, such as more detailed documentation about the spaces module?

@unsubscribe Sorry for the inconvenience.

I'm seeing this error in the log:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 70, in <module>
    async def reset_local_func(instruction_txtbox: gr.Textbox,
  File "/home/user/.local/lib/python3.10/site-packages/spaces/zero/decorator.py", line 76, in GPU
    return _GPU(task, duration, enable_queue)
  File "/home/user/.local/lib/python3.10/site-packages/spaces/zero/decorator.py", line 97, in _GPU
    raise NotImplementedError
NotImplementedError

I think this is because you decorated a function that doesn't use the GPU.

In any case, I'm not sure whether ZeroGPU works with your function, so I'll switch the hardware to a normal A10G for now.
@cbensimon Can you check if this Space can run on ZeroGPU?

InternLM org

Thanks for your reply. A normal A10G works well for me.

Unfortunately, ZeroGPU is not intended to be used with client/server frameworks like lmdeploy.
It's meant to be used directly with PyTorch code or higher-level libraries; it can't handle spawned servers the way lmdeploy runs them.
Furthermore, the NotImplementedError is raised because @spaces.GPU can't handle async functions.

One possible solution would be to use lmdeploy as a library (probably its server-side code) so that your app deals directly with PyTorch code and should be able to run on ZeroGPU
(though I suppose you'd then lose the ease of use of lmdeploy's simple client-side code).
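Since the traceback above comes from decorating an `async def`, one hedged workaround is to decorate a synchronous wrapper that drives the coroutine instead. This is only a sketch: `GPU` below is a no-op stand-in for `spaces.GPU` so it runs anywhere, and `generate_async` is a hypothetical placeholder for the real async generation call.

```python
import asyncio

# Stand-in for spaces.GPU so this sketch runs outside a Space; on a real
# ZeroGPU Space you would use `import spaces` and `@spaces.GPU` instead.
def GPU(fn):
    return fn

async def generate_async(prompt: str) -> str:
    # Placeholder for the real async generation call.
    return prompt.upper()

@GPU  # decorate a *sync* wrapper, since @spaces.GPU can't handle async defs
def generate(prompt: str) -> str:
    return asyncio.run(generate_async(prompt))

print(generate("hello"))  # HELLO
```

This sidesteps the NotImplementedError, though it wouldn't help with the spawned-server limitation described above.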

@cbensimon Thank you for the detailed explanation!

@unsubscribe We've recently started using ZeroGPU for community grants, so I asked whether you could adapt your code to use ZeroGPU, but it looks like this Space works better on normal hardware. Sorry for the trouble, and thanks for looking into it.
