Inference time issues

by xiaoshan-yang - opened

Hi, I saw the inference time on AI Hub is around 8.08 ms on the 8 Gen 3, but when I run qai_hub_models.models.stable_diffusion_quantized.demo, each step takes several minutes, and the full 5 steps take a very long time.
Is there anything I'm doing wrong?

Qualcomm org

For stable diffusion, we have posted inference times for every component of the network on AI Hub.

But the demo has to run all of the components many times. The total demo time includes uploading the model and data to AI Hub, running each component on device with the specific inputs, and then processing the outputs. So it's expected that the demo takes around 5 minutes.
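The gap between the two numbers can be sketched with some rough arithmetic. This is a hypothetical illustration, not measured data: it assumes the posted 8.08 ms figure is the per-invocation on-device latency of a single component, and that the demo runs 5 denoising steps, so nearly all of the demo's wall-clock time is upload and round-trip overhead rather than on-device compute.

```python
# Hypothetical figures for illustration only.
per_invocation_ms = 8.08   # posted on-device latency for one component run
steps = 5                  # denoising steps in the demo

# Pure on-device compute across all steps (one invocation per step assumed).
on_device_ms = per_invocation_ms * steps
print(f"on-device compute: {on_device_ms:.2f} ms")  # ~40 ms

# Observed end-to-end demo time (per the thread, roughly 5 minutes).
demo_ms = 5 * 60 * 1000

# Fraction of the demo spent on upload/transfer/post-processing overhead.
overhead_ms = demo_ms - on_device_ms
print(f"overhead: {overhead_ms / demo_ms:.4%} of total")
```

In other words, the device itself is fast; the minutes come from shuttling the model and data through AI Hub for each component run.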
