"""
    github: https://github.com/Tencent/ncnn/releases/tag/20240820
        - https://github.com/Tencent/ncnn/wiki/how-to-build
        - https://ncnn.readthedocs.io/en/latest/home.html
        - https://ncnn.readthedocs.io/en/latest/how-to-use-and-FAQ/openmp-best-practice.html
        - https://ncnn.readthedocs.io/en/latest/how-to-use-and-FAQ/use-ncnn-with-opencv.html
        - https://ncnn.readthedocs.io/en/latest/how-to-use-and-FAQ/use-ncnn-with-pytorch-or-onnx.html
    Blog tutorial: https://blog.csdn.net/m0_56942491/article/details/141033893
    Vulkan dependency: https://vulkan.lunarg.com/sdk/home#windows
        - https://github.com/engineer1109/LearnVulkan

    Also worth trying:
        1. https://github.com/xtensor-stack/xtensor
            - https://github.com/xtensor-stack/xtl
            - https://github.com/xtensor-stack/xsimd
"""

"""
A trial conversion. According to the docs, certain operators in the ONNX model
may need to be removed manually.
First download the Vulkan SDK: https://vulkan.lunarg.com/sdk/home#windows
Then download the ncnn tools.

Fortunately, daquexian developed a handy tool to eliminate them. cheers!

https://github.com/daquexian/onnx-simplifier

python3 -m onnxsim resnet18.onnx resnet18-sim.onnx
onnx to ncnn:
Finally, convert the simplified model to ncnn using tools/onnx2ncnn:

onnx2ncnn resnet18-sim.onnx resnet18.param resnet18.bin
"""

import os

onnx2ncnn = "D:/Development/Envs/ncnn-20240820-windows-vs2019-shared/x64/bin/onnx2ncnn.exe"

command_line = "{} v1_onnx.onnx v1_ncnn.param v1_ncnn.bin".format(onnx2ncnn)
# print(os.system(command_line))
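The driver above passes a shell string to `os.system`. A sketch of a more robust variant using `subprocess` with an argument list and an explicit existence check (the tool path and file names are the ones assumed above):

```python
import os
import subprocess

def run_onnx2ncnn(tool_path, onnx_model, out_param, out_bin):
    """Invoke onnx2ncnn; raise if the tool is missing or the conversion fails."""
    if not os.path.isfile(tool_path):
        raise FileNotFoundError(f"onnx2ncnn not found: {tool_path}")
    # Argument list (no shell string) handles spaces in Windows paths safely.
    result = subprocess.run(
        [tool_path, onnx_model, out_param, out_bin],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"onnx2ncnn failed: {result.stderr}")
    return out_param, out_bin

# Usage, with the same paths as above:
# run_onnx2ncnn(onnx2ncnn, "v1_onnx.onnx", "v1_ncnn.param", "v1_ncnn.bin")
```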

"""
onnx2ncnn may not fully meet your needs. For more accurate and elegant
conversion results, please use PNNX. PyTorch Neural Network eXchange (PNNX) is
an open standard for PyTorch model interoperability. PNNX provides an open model
format for PyTorch. It defines a computation graph as well as high-level operators
that strictly match PyTorch. You can obtain pnnx in one of the following ways:
1. Install via python
   pip3 install pnnx
2. Get the executable from https://github.com/pnnx/pnnx
For more information, please refer to https://github.com/pnnx/pnnx

export your torch model to torchscript / onnx
    import torch
    import torchvision.models as models
    
    net = models.resnet18(pretrained=True)
    net = net.eval()
    
    x = torch.rand(1, 3, 224, 224)
    
    # You could try disabling checking when tracing raises error
    # mod = torch.jit.trace(net, x, check_trace=False)
    mod = torch.jit.trace(net, x)
    
    mod.save("resnet18.pt")
    
    # You could also try exporting to the good-old onnx
    torch.onnx.export(net, x, 'resnet18.onnx')

pnnx converts torchscript / onnx to optimized pnnx model and ncnn model files
    ./pnnx resnet18.pt inputshape=[1,3,224,224]
    ./pnnx resnet18.onnx inputshape=[1,3,224,224]

macOS zsh users may need double quotes to prevent ambiguity
    ./pnnx resnet18.pt "inputshape=[1,3,224,224]"
    
For models with multiple inputs, use a list
    ./pnnx resnet18.pt inputshape=[1,3,224,224],[1,32]
For models with a non-fp32 input data type, add a type suffix
    ./pnnx resnet18.pt inputshape=[1,3,224,224]f32,[1,32]i64
pick resnet18_pnnx.py for the pnnx-optimized torch model
pick resnet18.ncnn.param and resnet18.ncnn.bin for ncnn inference
"""

pnnx = "D:/Development/Envs/pnnx-20240819-windows/pnnx.exe"
command_line = "{} v1_onnx.onnx inputshape=[1,3,512,512]".format(pnnx)
print(os.system(command_line))