ONNX MLIR needs to know the output shape. In this specific case, since everything is static, I suspect that filling in the output shape would let it make progress further down. We don't support QLinearAdd because it is not an official ONNX op but a Microsoft extension, so the compiler would choke later when attempting to lower that operation.
The best path would be to propose the operation to ONNX; once it is part of the standard, we could implement it.
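For context, the QLinearAdd contrib op is semantically a dequantize, add, requantize sequence. A rough NumPy sketch of that semantics (the helper names here are mine, not onnx-mlir or ONNX Runtime code, and this assumes uint8 quantization):

```python
import numpy as np

def dequantize(q, scale, zero_point):
    # Linear dequantization: real = (q - zero_point) * scale
    return (q.astype(np.int32) - zero_point) * scale

def quantize(x, scale, zero_point):
    # Linear quantization back to uint8, rounding and saturating
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def qlinear_add(a, a_scale, a_zp, b, b_scale, b_zp, c_scale, c_zp):
    # QLinearAdd sketch: dequantize both inputs, add in the real
    # domain, then requantize with the output's scale/zero point
    real_sum = dequantize(a, a_scale, a_zp) + dequantize(b, b_scale, b_zp)
    return quantize(real_sum, c_scale, c_zp)
```

Note that the output shape of such an elementwise op depends only on the input shapes, which is why shape inference alone could succeed even without a lowering.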
Thank you for the explanation. Since custom operators are officially supported in the ONNX spec, wouldn't it be good practice to add some level of support in onnx-mlir? Of course, it wouldn't be able to generate runnable code, since the actual function of the custom op is unknown. But if the output shape is already there (a static model), shouldn't at least some of the passes still work?
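If the op is known to be elementwise, its output shape reduces to NumPy-style broadcasting of the input shapes, so shape inference could proceed without knowing the op's function. A minimal sketch of that rule (a hypothetical helper, not onnx-mlir's actual shape-inference code):

```python
from itertools import zip_longest

def broadcast_shape(a, b):
    # Align shapes from the rightmost dimension; each pair of dims
    # must be equal, or one of them must be 1 (broadcast)
    result = []
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != y and x != 1 and y != 1:
            raise ValueError(f"incompatible dims {x} and {y}")
        result.append(max(x, y))
    return tuple(reversed(result))
```

For a fully static model the inferred shape is a concrete tuple, e.g. broadcasting a per-channel bias shape against an activation shape.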
I know quantized models aren't supported yet. I would like to confirm that this symptom is due to the Dequantize/Quantize operators.
The model I'm testing is ShuffleNet-v2-int8 from here.
Command line:
onnx-mlir --mlir-pass-statistics --mlir-print-ir-after-all --EmitLLVMIR ~/shufflenet-v2-12-int8.onnx
Error: