Running the ONNX model on GPU

#33
by colosimo98 - opened

I get this warning when I try running the onnx model on GPU.

2024-05-14 11:40:16.228364021 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-05-14 11:40:16.228395653 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.

Currently I'm loading the model as follows:

    import torch
    import onnxruntime as ort

    providers = [("CUDAExecutionProvider", {"device_id": torch.cuda.current_device()})]
    ort_sess = ort.InferenceSession(model_path, providers=providers)
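As the warning itself suggests, the node assignments can be inspected with verbose logging. A minimal sketch of how to do that, assuming `onnxruntime-gpu` is installed (`model_path` is a placeholder for the exported .onnx file):

    # Sketch: list the execution providers this onnxruntime build can use,
    # and enable verbose logging so per-node EP assignments are printed.
    import onnxruntime as ort

    available = ort.get_available_providers()
    print(available)  # CPUExecutionProvider is always present

    so = ort.SessionOptions()
    so.log_severity_level = 0  # 0 = VERBOSE: logs which EP each node was assigned to

    providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    # ort_sess = ort.InferenceSession(model_path, sess_options=so, providers=providers)

If `CUDAExecutionProvider` is missing from the list, the session silently falls back to CPU, which would also explain the slowdown versus the original IS-Net model.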

Has anybody else encountered the same warning or something similar? Also, the .onnx model is much slower than the original IS-Net model; is that expected? Could you please give some more clarification about it :) Thanks in advance!
