Incorrect Output
I am trying to run the ONNX model from this repository on CPU with Python 3.11 and onnxruntime-qnn 1.19.0 on Windows, on a Snapdragon 8cx Gen 3 processor (Windows Dev Kit 2023).
Code:
import onnxruntime

options = onnxruntime.SessionOptions()
# "0" keeps CPU EP fallback enabled; "1" would disable it
options.add_session_config_entry("session.disable_cpu_ep_fallback", "0")
options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_EXTENDED
self.session = onnxruntime.InferenceSession(
    path,
    sess_options=options,
    providers=["QNNExecutionProvider"],
    provider_options=[{"backend_path": "QnnCpu.dll"}],
)
outputs = self.session.run(self.output_names, {self.input_names[0]: input_tensor})
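To confirm that the QNN execution provider was actually enabled rather than the session silently falling back to the default CPU provider, you can query the session after construction. A minimal sketch, assuming self.session was created as above:

# Lists the providers the session actually enabled, in priority order.
# If "QNNExecutionProvider" is missing, the QNN backend failed to load.
print(self.session.get_providers())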
Environment:
QNN_SDK_ROOT=C:\Qualcomm\AIStack\QAIRT\2.22.0.240425
Output:
Reply: I ran python demo.py --target-runtime onnx --on-device and verified that the output image is correct. You may need to permute the output. See:
import numpy as np
import torch
from PIL.Image import Image
from PIL.Image import fromarray as ImageFromArray

def torch_tensor_to_PIL_image(data: torch.Tensor) -> Image:
    """
    Convert a torch tensor (dtype float32, range [0, 1], shape CHW) into a PIL image (HWC).
    """
    out = torch.clip(data, min=0.0, max=1.0)
    np_out = (out.permute(1, 2, 0).detach().numpy() * 255).astype(np.uint8)
    return ImageFromArray(np_out)
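Since session.run returns NumPy arrays rather than torch tensors, the same permutation can be applied directly in NumPy. A minimal sketch, assuming the model emits a single float32 output of shape (1, C, H, W) with values in [0, 1] (the batch dimension and value range are assumptions):

import numpy as np
from PIL import Image

out = outputs[0][0]                 # drop the assumed batch dim: (C, H, W)
out = np.clip(out, 0.0, 1.0)        # clamp to [0, 1]
hwc = np.transpose(out, (1, 2, 0))  # CHW -> HWC for PIL
img = Image.fromarray((hwc * 255).astype(np.uint8))
img.save("output.png")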