Incorrect Output

#2
by RayyanAkhtar - opened

I am trying to run the ONNX model from this repository on CPU with Python 3.11 and onnxruntime-qnn==1.19.0, on Windows on a Snapdragon 8cx Gen 3 processor (Windows Dev Kit 2023).

Code:

import onnxruntime

options = onnxruntime.SessionOptions()
options.add_session_config_entry("session.disable_cpu_ep_fallback", "0")
options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_EXTENDED
self.session = onnxruntime.InferenceSession(
    path,
    sess_options=options,
    providers=["QNNExecutionProvider"],
    provider_options=[{"backend_path": "QnnCpu.dll"}],
)

outputs = self.session.run(self.output_names, {self.input_names[0]: input_tensor})

Environment:

QNN_SDK_ROOT=C:\Qualcomm\AIStack\QAIRT\2.22.0.240425

Output:

[Attached image: keyboard_xlsr_cpuqdq.jpg]

Qualcomm org

I ran `python demo.py --target-runtime onnx --on-device` and verified that the output image is correct. You may need to permute the output. See

https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/_shared/super_resolution/app.py#L67

import numpy as np
import torch
from PIL.Image import Image, fromarray as ImageFromArray

def torch_tensor_to_PIL_image(data: torch.Tensor) -> Image:
    """
    Convert a Torch tensor (dtype float32) with range [0, 1] and shape CHW into a PIL image (HWC).
    """
    out = torch.clip(data, min=0.0, max=1.0)
    np_out = (out.permute(1, 2, 0).detach().numpy() * 255).astype(np.uint8)
    return ImageFromArray(np_out)
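Applying the same permute to the onnxruntime output from the question, here is a minimal NumPy-only sketch. It assumes the model returns a single float32 output of shape (1, 3, H, W) with values in [0, 1] — check your model's actual output shape and range, as these are assumptions:

```python
import numpy as np

def nchw_to_uint8_hwc(output: np.ndarray) -> np.ndarray:
    """Drop the batch dim and convert a (1, C, H, W) float32 array in [0, 1]
    to an (H, W, C) uint8 array suitable for PIL.Image.fromarray."""
    chw = np.clip(output[0], 0.0, 1.0)   # (C, H, W), clamped to [0, 1]
    hwc = np.transpose(chw, (1, 2, 0))   # permute CHW -> HWC
    return (hwc * 255).astype(np.uint8)

# Hypothetical usage with the session output from the question:
# img_array = nchw_to_uint8_hwc(outputs[0])
# PIL.Image.fromarray(img_array).save("result.png")
```

If the image still looks wrong after the permute, it is worth checking whether the model expects/returns BGR rather than RGB channel order.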
