Report #6 by argimenes - opened
There seems to be a problem with the model, as it was still processing around five minutes in. NB: this may have something to do with my own billing setup. Although, after I set up billing and created an Access Token, I noticed that some images still time out or return an "Undefined" response.
Hi there, the model does take a while to run inference on CPU (the default). If your computer has a GPU, you'll see shorter inference times running locally. You may also wish to explore the paid options Hugging Face provides for running inference on hosted GPU resources, e.g. Gradio Spaces.
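One common cause of timeouts and "Undefined" responses with hosted inference is that the request returns before the model has finished loading or processing. A minimal retry sketch is below; note that `query` here is a hypothetical stand-in for whatever call you make to the service, not an actual Hugging Face API, and the backoff parameters are illustrative assumptions.

```python
import time

def query_with_retry(query, max_retries=5, wait=0.5):
    """Retry a query while the service is still warming up.

    `query` is any callable returning the model's response, or None
    when the service replies with an undefined/still-loading result.
    """
    for _ in range(max_retries):
        result = query()
        if result is not None:
            return result
        time.sleep(wait)  # back off before retrying
        wait *= 2         # exponential backoff between attempts
    raise TimeoutError("model did not respond after retries")

# Example with a fake endpoint that only responds on the third call:
calls = {"n": 0}
def fake_query():
    calls["n"] += 1
    return {"label": "ok"} if calls["n"] >= 3 else None
```

With this wrapper, a slow-to-load model resolves on a later attempt instead of surfacing an undefined response to the caller.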
Colby changed discussion status to closed