PyTorch packages compiled with CUDA on Gradio Spaces
Hi,
I just noticed that HF disabled support for Docker Spaces on ZeroGPU. You basically broke my Space without any notification and without any information on how to get it running again. I'm slightly annoyed.
So I tried to get it running again by moving the Dockerfile steps into requirements.txt and subprocess.run calls, and ran into the following issue:
PyTorch and other related libraries, most importantly PyTorch Geometric (PyG), check for available GPUs when they are installed via pip. During the Gradio Space build there is no GPU available, so they install their CPU builds, and at runtime I get this error: RuntimeError: Not compiled with CUDA support. I tried to fix this by using explicit PyTorch wheel indices in requirements.txt:
--index-url https://download.pytorch.org/whl/cu124
--extra-index-url=https://pypi.org/simple
But that doesn't fix it. Are there any best practices for this? Any ideas?
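For reference, this is roughly what I'm experimenting with in requirements.txt at the moment. The torch/PyG versions and the PyG wheel index URL are just guesses on my side, not a combination I know to be correct:

# requirements.txt (sketch, versions are placeholders)
--index-url https://download.pytorch.org/whl/cu124
--extra-index-url https://pypi.org/simple
# PyG publishes prebuilt CUDA wheels on its own index, so the extensions
# hopefully don't need to be compiled (and GPU-detected) during the Space build
--find-links https://data.pyg.org/whl/torch-2.4.0+cu124.html
torch==2.4.0
torch-geometric
torch-scatter
torch-sparse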
Side problem: I have the main part of my code in my public GitHub repo and only boilerplate code on HF. A git clone call in main works, but a git submodule would be cleaner. However, Gradio Spaces don't seem to check out submodules. Please add support for recursive submodule checkout!
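In case it's useful context, this is roughly the clone-at-startup workaround I use at the top of app.py right now; the repo URL and target directory below are placeholders:

# app.py (sketch): clone the real code from GitHub at startup,
# since the Space checkout doesn't include submodules
import os
import subprocess
import sys

REPO_URL = "https://github.com/<user>/<repo>.git"  # placeholder
TARGET_DIR = "external_repo"                       # placeholder

if not os.path.isdir(TARGET_DIR):
    subprocess.run(["git", "clone", "--depth", "1", REPO_URL, TARGET_DIR], check=True)

# make the cloned code importable
sys.path.insert(0, TARGET_DIR)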