submission takes forever

#4
by Ilayk - opened

Hi,
evaluating the results on the audio task takes a very long time, whereas running it on my laptop takes less than one minute...
I'm not sure if I'm submitting the files properly. Should audio.py have an explicit call to the evaluate_audio function or not? I tried both, but I want to make sure I'm submitting the correct format.

thx

Frugal AI Challenge org

Hello @Ilayk, thanks for pointing it out,

It may come from two sources:

  • It can time out from the portal directly; I just added a queue to avoid this.
  • Or it can time out from your submission API, because the compute is not done by the portal: the portal simply calls the evaluation endpoint of the API you deployed, so the timeout error would be visible in the logs of your Space. If so, can you tell me the error? (A quick way to run this check yourself is sketched below.)
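
For reference, here is a minimal sketch of that check: call your deployed Space's endpoint yourself with a generous timeout and see whether the API responds. The URL, route, and payload below are placeholders and assumptions, not the official client; check your Space's /docs page for the actual route and request schema.

```python
import requests

# Hedged sketch: probe the deployed Space's evaluation endpoint directly,
# with a timeout far longer than the portal's, to see whether the API
# itself is what times out.
SPACE_URL = "https://your-username-your-space.hf.space"  # hypothetical URL

resp = requests.post(
    f"{SPACE_URL}/tasks/audio",  # assumed route; verify on your Space's /docs page
    json={},                     # assumed payload; verify the schema in /docs
    timeout=600,                 # 10 minutes, generous on purpose
)
resp.raise_for_status()          # surface HTTP errors explicitly
print(resp.json())               # evaluation results, if the API answers in time
```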

In any case, the public leaderboard is meant to be informative rather than to serve as a final ranking. So you can also just use the submission form https://framaforms.org/2025-frugal-ai-challenge-submission-form-1736883260-0 to submit your model for final evaluation on the private leaderboard, and then simply use the eval functions locally to guide your iterations :)
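
If it helps, a local iteration loop can be as simple as the sketch below. The import paths and the request model are assumptions based on this thread, not the official template; adapt them to how your own repository names things. The point is just to call the eval function directly instead of going through the portal.

```python
import asyncio

# Hedged sketch of a local iteration loop. The import paths and the
# AudioEvaluationRequest model are assumptions; adapt them to your repo.
from tasks.audio import evaluate_audio                      # assumed module path
from tasks.utils.evaluation import AudioEvaluationRequest   # assumed module path

request = AudioEvaluationRequest()              # assumed defaults (public dataset)
results = asyncio.run(evaluate_audio(request))  # run the async endpoint directly
print(results)                                  # accuracy, emissions, runtime, ...
```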

Hi Theo,
thanks for the response.
Regarding the second option: when I run the evaluation function locally, there is no problem (and it takes ~1 minute). I also don't see errors in the log of the Space. This is what it looks like after 25 minutes:
===== Application Startup at 2025-01-27 17:58:54 =====

[codecarbon WARNING @ 17:59:45] Multiple instances of codecarbon are allowed to run at the same time.
[codecarbon INFO @ 17:59:45] [setup] RAM Tracking...
[codecarbon INFO @ 17:59:45] [setup] CPU Tracking...
[codecarbon WARNING @ 17:59:45] No CPU tracking mode found. Falling back on CPU constant mode.
Linux OS detected: Please ensure RAPL files exist at \sys\class\powercap\intel-rapl to measure CPU

[codecarbon INFO @ 17:59:46] CPU Model on constant consumption mode: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz
[codecarbon INFO @ 17:59:46] [setup] GPU Tracking...
[codecarbon INFO @ 17:59:46] No GPU found.
[codecarbon INFO @ 17:59:46] >>> Tracker's metadata:
[codecarbon INFO @ 17:59:46] Platform system: Linux-5.10.230-223.885.amzn2.x86_64-x86_64-with-glibc2.36
[codecarbon INFO @ 17:59:46] Python version: 3.9.21
[codecarbon INFO @ 17:59:46] CodeCarbon version: 2.8.2
[codecarbon INFO @ 17:59:46] Available RAM : 123.806 GB
[codecarbon INFO @ 17:59:46] CPU count: 16
[codecarbon INFO @ 17:59:46] CPU model: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz
[codecarbon INFO @ 17:59:46] GPU count: None
[codecarbon INFO @ 17:59:46] GPU model: None
[codecarbon INFO @ 17:59:49] Saving emissions data to file /app/emissions.csv
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:7860 (Press CTRL+C to quit)
INFO: 10.16.37.146:13078 - "GET / HTTP/1.1" 200 OK
INFO: 10.16.5.242:12842 - "GET /?logs=container HTTP/1.1" 200 OK
INFO: 10.16.39.228:50236 - "GET /?logs=container HTTP/1.1" 200 OK

It seems there are no errors here.
Regarding your last point (submitting to the private leaderboard): I'm not sure I understand how I can submit without evaluation. I already filled in the form several days ago.

Thanks,

Ilay

Frugal AI Challenge org

Mm, that's weird. I got a similar error, but after restarting my submission Space (not the portal) it did not time out.
Another way to check is to use FastAPI directly to do the evaluation:

[screenshot: the Space page showing its direct URL]

Go to the URL shown here and add /docs

[screenshot: the Space URL with /docs appended]

You should see:

[screenshot: the FastAPI docs page listing the endpoints]

And use "Try it out" and "Execute" on the audio task.

[screenshot: the audio endpoint expanded in the docs UI]

You should get the evaluation results the same way as locally, which would show that your model works in production without even going through the portal :)

[screenshot: evaluation results returned by the endpoint]

btw - my submission should not include an explicit call to the evaluation function, since that call is part of the evaluation pipeline, right?

Frugal AI Challenge org

And for your question "Regarding your last point (submitting to the private leaderboard): I'm not sure I understand how I can submit without evaluation. I already filled in the form several days ago." -> because we actually run the evaluation on our side after the challenge ends, so that everything runs on the same hardware and on a private dataset (but using the same evaluation functions). So the evaluation here and the public leaderboard are mostly informative, to help you position yourself and iterate during the challenge :)

Frugal AI Challenge org

"btw - I should not submit an explicit call to the evaluation function, as this is part of the evaluation pipeline, right?" yes exactly but you can use the evaluation function locally as you'd like !

This is the error I get when using FastAPI:

[screenshot: error message from the FastAPI docs page]

However, in the Space logs, it seems like the evaluation finished:

[screenshot: Space logs showing the evaluation completed]

So another try via FastAPI worked:

[screenshot: successful evaluation results via FastAPI]

However, using 'Evaluate model' in the submission portal results in an error:

[screenshot: the error shown by the submission portal]

The Space logs look normal:

[screenshot: normal-looking Space logs]

Do you have an idea what could cause this?

thx

Frugal AI Challenge org

I am encountering the same issue. I am unable to submit; it takes a long time and then displays the same error message.
