Apply for community grant: Personal project (gpu)

#1 by nateraw - opened

Hey there, could I get an A10G small grant for this project?

It's an enhanced demo of musicgen-songstarter-v0.2 that lets you hum an idea and get a music sample out.

Hi @nateraw , we've assigned a10g-small to this Space.
As we recently started using ZeroGPU as the default hardware for grants, it would be nice if you could take a look at the org card of the ZeroGPU explorers org and check if your Space can run on ZeroGPU. I just sent an invitation to join the org. After joining, you should be able to see the "Zero Nvidia A100" option in the Space Settings.

tysm! Will look into ZeroGPU tomorrow/over the weekend.

@hysts is there a good way to verify if just decorating my __call__ fn here in the Predictor class will work without breaking this space for others if it's wrong?
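For reference, the pattern in question looks roughly like this; the Predictor body below is a paraphrase with a toy model standing in for the real one, not the Space's actual code:

```python
import spaces
import torch

class Predictor:
    def __init__(self):
        # Toy stand-in for loading musicgen-songstarter-v0.2;
        # note the model lives on the instance.
        self.model = torch.nn.Linear(4, 4).eval()

    @spaces.GPU  # decorating the bound method directly
    def __call__(self, x: torch.Tensor) -> torch.Tensor:
        with torch.inference_mode():
            return self.model(x)
```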

@nateraw I think you can simply duplicate this Space privately and test if ZeroGPU works.

running into this on submission of example prediction when using zero:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 527, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 261, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1788, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1340, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 759, in wrapper
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 154, in gradio_handler
    worker.arg_queue.put(((args, kwargs), GradioPartialContext.get()))
  File "/usr/local/lib/python3.10/site-packages/spaces/utils.py", line 42, in put
    super().put(obj)
  File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 371, in put
    obj = _ForkingPickler.dumps(obj)
  File "/usr/local/lib/python3.10/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
  File "/usr/local/lib/python3.10/site-packages/torch/multiprocessing/reductions.py", line 193, in reduce_tensor
    raise RuntimeError(
RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries.  If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).

Any idea what this might be? (This is the full error log.) If you have god access on the hub, feel free to poke around/edit this: https://huggingface.co/spaces/nateraw/singing-songstarter-zero

> running into this on submission of example prediction when using zero:

> if just decorating my __call__ fn here in the Predictor class will work

Ah, sorry, I should have checked the code and mentioned this earlier, but the @spaces.GPU decorator doesn't work well with a class that has a model instance as a field. So, can you try instantiating your model in the global scope and changing your __call__ function to a normal function? For example: https://huggingface.co/spaces/hysts/InstructBLIP
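A minimal sketch of that recommended shape, again with a toy model as a placeholder for the real one:

```python
import spaces
import torch

# Instantiate the model once at import time, in the global scope,
# rather than storing it on a Predictor instance.
model = torch.nn.Linear(4, 4).eval()  # placeholder for the real model

@spaces.GPU  # decorate a plain module-level function, not a method
def predict(x: torch.Tensor) -> torch.Tensor:
    with torch.inference_mode():
        return model(x)
```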

hey @nateraw , please let us know if you need a hand with Zero. It's really powerful when it works, enabling many concurrent users to share a powerful GPU.

@radames feel free to take a look at that fork... My issue here is that I'm using the same app.py file for Colab as well via my GitHub repo, so it should work for both, preferably without a dependency issue. Perhaps we do a "try: import spaces" situation at the top, then conditionally wrap a second function around the call if the import succeeded? idk.
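Something like this sketch, where the no-op fallback decorator is an assumption about how to keep Colab working, not an official spaces feature:

```python
# Same app.py on Spaces and Colab: fall back to a no-op decorator
# when the `spaces` package isn't installed.
try:
    import spaces
    maybe_gpu = spaces.GPU
except ImportError:
    def maybe_gpu(fn):  # hypothetical no-op stand-in off Spaces
        return fn

@maybe_gpu
def predict(prompt: str) -> str:
    return f"sample for: {prompt}"  # placeholder body
```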

Regarding the spaces package, it's available on PyPI, so maybe you can just add it to your requirements.txt. Also, the @spaces.GPU decorator does nothing in non-ZeroGPU environments, so people can duplicate ZeroGPU Spaces to use privately with paid hardware and test them in their local environment without modifying the code.
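If the decorator really is inert outside ZeroGPU, the unconditional import is simpler than the guarded version sketched above (same placeholder predict):

```python
import spaces  # listed in requirements.txt

@spaces.GPU  # per the note above, a no-op outside ZeroGPU
def predict(prompt: str) -> str:
    return f"sample for: {prompt}"  # placeholder body
```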

I can't seem to find the source code online. PyPI links to huggingface_hub for the repository link. I'd rather not add a package to my requirements.txt and have folks install it if the code isn't publicly available. πŸ™

I have a fix that I'm pretty sure will work either way, provided wrapping my __call__ with a @spaces.GPU-decorated fn works. :)
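i.e. keeping the Predictor class intact and routing Space traffic through a thin module-level wrapper. A sketch building on the Predictor toy above (with __call__ itself left undecorated), not the fix as shipped:

```python
import spaces

predictor = Predictor()  # the toy class sketched earlier, built once globally

@spaces.GPU
def predict(x):
    # Only the plain arguments cross the ZeroGPU worker boundary here;
    # the predictor itself stays in this process's global scope.
    return predictor(x)
```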

> I can't seem to find the source code online. PyPI links to huggingface_hub for the repository link. I'd rather not add a package to my requirements.txt and have folks install it if the code isn't publicly available.

Ah, yeah, the original repo seems to be private. cc @cbensimon
But in case you missed it, the source code of the spaces package is available on PyPI. You can find it at https://pypi.org/project/spaces/#files.
