Apply for community grant: Academic project (gpu)

#1
by MykolaL - opened
Owner

StableDesign is a SOTA deep learning model that transforms images of empty rooms into fully furnished spaces based on text descriptions. This model pipeline won 2nd place in the Generative Interior Design 2024 competition (https://www.aicrowd.com/challenges/generative-interior-design-challenge-2024/leaderboards?challenge_round_id=1314) and outperforms the current best pipeline for the same task (https://huggingface.co/spaces/ml6team/controlnet-interior-design).
This demo would benefit greatly from GPU hardware for better interactivity and user experience.
A GPU with 24 GB of VRAM would also be great, as it would allow us to deploy the model for high-resolution images.
Please refer to our GitHub repository (https://github.com/Lavreniuk/generative-interior-design) for more details.

Hi @MykolaL , we've assigned ZeroGPU to this Space. Please check the compatibility and usage sections of this page so your Space can run on ZeroGPU.

Owner

Hi, @hysts , thank you very much!
However, I have a problem utilizing the GPU. It works fine locally, but in the Space I ran into the following issue:
device = 'cuda'
but after self.pipe = self.pipe.to(device)
the pipeline (and any other model) is still on the CPU, even though printing the device shows device:0, an A100 GPU. At the same time, the "image_to_depth" image is moved to the GPU...
Could you please help with this?

@MykolaL Thanks for testing ZeroGPU! Can you try instantiating your models in the global scope without using a class? ZeroGPU may not work well with classes that have a model as a field.
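For illustration, here is a minimal sketch of the suggested pattern. The names are hypothetical (the real Space uses diffusers pipelines, and the inference function would be wrapped with the `@spaces.GPU` decorator from the `spaces` package); a stand-in class is used here so the structure is clear without GPU dependencies. The key idea: instantiate models once at module (global) scope rather than as class fields, and move them to the device inside the decorated inference function.

```python
class DummyPipe:
    """Stand-in for a diffusers pipeline, for illustration only."""

    def __init__(self):
        self.device = "cpu"

    def to(self, device):
        self.device = device
        return self

    def __call__(self, prompt):
        return f"image for {prompt!r} on {self.device}"


# Instantiated in the global scope, not as a field of a class --
# this is the layout ZeroGPU expects.
pipe = DummyPipe()


# In the real Space this function would be decorated with @spaces.GPU,
# so a GPU is allocated only for the duration of the call.
def generate(prompt):
    pipe.to("cuda")  # move to GPU when the function actually runs
    return pipe(prompt)
```

The anti-pattern to avoid is wrapping `pipe` inside a class (`self.pipe = ...`) and calling `self.pipe.to(device)` from a method, which is what failed above.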

Owner

@hysts , thank you for the advice, now it is working perfectly!

Awesome!!

what an amazing project, congrats

Owner

Dear @hysts , I have run into an issue where this Space's page takes a very long time to load. When I restart the Space it is much better, but after a few hours the problem comes back, so I have to restart the app several times per day. Could you help me with this problem?

(screenshot of the issue attached)

@MykolaL Sorry about this issue. It's a known issue with gradio 4.25.0 and ZeroGPU, which was fixed in gradio==4.26.0, so upgrading the gradio version should fix it. You can change the gradio version in the README.md. https://huggingface.co/spaces/MykolaL/StableDesign/blob/f17a90ea63e1f6896e9fd2859c861b12aab31b0c/README.md?code=true#L7
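For reference, the gradio version for a Space is pinned via the `sdk_version` field in the YAML front matter at the top of its README.md. The other fields below are illustrative, not copied from the actual Space:

```yaml
---
title: StableDesign
sdk: gradio
sdk_version: 4.26.0
app_file: app.py
---
```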

Owner

@hysts , thank you for quick response, I will try it!

I'm seeing this error:

  File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 55, in create_examples
    examples_obj = Examples(
  File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 153, in __init__
    raise ValueError("If caching examples, `fn` and `outputs` must be provided")
ValueError: If caching examples, `fn` and `outputs` must be provided

I think it's related to this issue, and you can avoid the error by adding cache_examples=False to gr.Examples for now. https://huggingface.co/spaces/MykolaL/StableDesign/blob/6369e62ee776689bccb4b429f96c49030e974acc/app.py#L325-L326

Owner

@hysts , could you please help me solve an error that suddenly appeared today, even though I haven't changed anything?

Hmm, the error is

===== Application Startup at 2024-07-03 22:12:19 =====

The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.


0it [00:00, ?it/s]
0it [00:00, ?it/s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
  File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
  File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 101, in handle_request
    return self._connection.handle_request(request)
  File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 143, in handle_request
    raise exc
  File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 113, in handle_request
    ) = self._receive_response_headers(**kwargs)
  File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 186, in _receive_response_headers
    event = self._receive_event(timeout=timeout)
  File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/http11.py", line 224, in _receive_event
    data = self._network_stream.read(
  File "/usr/local/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 124, in read
    with map_exceptions(exc_map):
  File "/usr/local/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ReadTimeout: timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 79, in <module>
    def segment_image(
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/decorator.py", line 79, in GPU
    return _GPU(task, duration)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/decorator.py", line 111, in _GPU
    client.startup_report()
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/client.py", line 39, in startup_report
    while (status := client.startup_report()) is httpx.codes.NOT_FOUND: # pragma: no cover
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/api.py", line 80, in startup_report
    res = self.client.post('/startup-report')
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1145, in post
    return self.request(
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 827, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 914, in send
    response = self._send_handling_auth(
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1015, in _send_single_request
    response = transport.handle_request(request)
  File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 232, in handle_request
    with map_httpcore_exceptions():
  File "/usr/local/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ReadTimeout: timed out
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.


0it [00:00, ?it/s]
0it [00:00, ?it/s]

Did you try restarting your Space?
I'm not sure, but it seems to be due to some temporary infra issue or something, and I guess it can be fixed by restarting. I'll restart the Space and let's see what happens.

It worked!

Owner

I had tried restarting, but it did not help.
Yes, now it works, thank you!
