The model was converted from PyTorch weights using the standard ONNX export path. The conversion and validation pipeline can be found here.

model.onnx is an optimized version of the exported model; model_quant.onnx is a quantized version of that optimized model.

I checked the difference between the original model's predictions and those of the exported models on a portion of the scene_parse_150 dataset:

  1. model.onnx - the total absolute difference is 127.364 over a 10x150x128x128 output tensor, which looks reasonable. The effective difference (the number of per-pixel class predictions that change) is 0, which is good.
  2. model_quant.onnx - the total absolute difference is 7876547, which is huge, but the effective difference is only 182: the predicted class changes for only 182 of the 10x128x128 pixels (about 0.1%).
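The two metrics above can be sketched as follows. This is a minimal illustration, not the actual validation pipeline: the function name `compare_outputs` and the toy array shapes are my own, and the real check would run on (batch, classes, H, W) logits of shape 10x150x128x128.

```python
import numpy as np

def compare_outputs(ref_logits: np.ndarray, onnx_logits: np.ndarray):
    """Compare reference logits to exported-model logits.

    Both arrays have shape (batch, classes, H, W). Returns the total
    absolute difference of the raw logits and the effective difference:
    the number of pixels whose argmax class prediction changed.
    """
    # Raw numeric drift: sum of absolute element-wise differences.
    abs_diff = float(np.abs(ref_logits - onnx_logits).sum())
    # Effective drift: pixels where the predicted class disagrees.
    changed = int((ref_logits.argmax(axis=1)
                   != onnx_logits.argmax(axis=1)).sum())
    return abs_diff, changed

# Toy check with a tiny 2x3x4x4 tensor (hypothetical data).
rng = np.random.default_rng(0)
ref = rng.normal(size=(2, 3, 4, 4)).astype(np.float32)
print(compare_outputs(ref, ref.copy()))  # identical outputs -> (0.0, 0)
```

A large absolute difference with a near-zero effective difference, as seen for model_quant.onnx, means quantization shifted the logits substantially without changing which class wins at most pixels.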
