[Conformance] TorchFX/OV backends Alignment #2996
Conversation
Please rerun the post_training_quantization build.

/post_training_quantization/504/ finished successfully.

Please add unit tests for constant folding and docstrings.

Tested by
@alexsu52 I will not review a PR that adds a transformation that will be applied to any model in
@AlexanderDokuchaev, please take a look.
In the case where the user provides a model with already inserted Quantize-Dequantize (or Quantize-random_nodes-Dequantize) subgraphs, shouldn't those be ignored by constant folding?
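For context, a rough sketch of the kind of guard being asked about, assuming PyTorch 2's decomposed quantize/dequantize ops; the `_QDQ_OPS` set and `is_foldable` predicate here are illustrative, not the PR's actual code:

```python
import torch
import torch.fx
import torch.ao.quantization.fx._decomposed  # noqa: F401  registers quantized_decomposed ops

# Hypothetical guard: keep user-inserted quantize/dequantize chains out of
# constant folding so pre-quantized models are left untouched.
_QDQ_OPS = {
    torch.ops.quantized_decomposed.quantize_per_tensor.default,
    torch.ops.quantized_decomposed.dequantize_per_tensor.default,
    torch.ops.quantized_decomposed.quantize_per_channel.default,
    torch.ops.quantized_decomposed.dequantize_per_channel.default,
}


def is_foldable(node: torch.fx.Node) -> bool:
    # Never fold a quantize/dequantize node itself ...
    if node.target in _QDQ_OPS:
        return False
    # ... and never fold a constant that feeds one, so Quantize-Dequantize
    # (or Quantize-...-Dequantize) patterns survive folding intact.
    return all(user.target not in _QDQ_OPS for user in node.users)
```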
@anzr299, I fixed
```diff
 weight_node = get_const(nodes_map["weight"])
 scale_node = get_const(nodes_map["scale"])
 zp_node = get_const(nodes_map["zero_point"])
-axis = nodes_map["axis"]
+axis = nodes_map.get("axis")
```
I think we can use the same function for axis too.
```diff
-axis = nodes_map.get("axis")
+axis = get_const(nodes_map.get("axis"))
```
Nope, the per-tensor case has no "axis" key, so `[]` raises a KeyError. The intended axis value for the per-tensor case is None, and `.get` returns None for a missing key: https://docs.python.org/3/library/stdtypes.html#dict.get
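A quick, runnable illustration of that difference:

```python
# Per-tensor pattern match: there is no "axis" entry in the map.
nodes_map = {"weight": "w", "scale": "s", "zero_point": "zp"}

print(nodes_map.get("axis"))  # None -- exactly the intended per-tensor value

try:
    nodes_map["axis"]
except KeyError as err:
    print(f"KeyError: {err}")  # KeyError: 'axis'
```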
Sorry about that, I have updated the suggestion. I meant to say that we can pass this to the get_const function as well, to keep it consistent with the others.
Done
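For reference, a None-tolerant `get_const` along these lines keeps all four call sites uniform. This is only a sketch: it assumes the constants are plain `get_attr` nodes on the owning `GraphModule`; the PR's real helper may resolve them differently.

```python
from typing import Optional

import torch
import torch.fx


def get_const(node: Optional[torch.fx.Node]) -> Optional[torch.Tensor]:
    # Passing None through lets the optional "axis" key share the same
    # retrieval path as "weight"/"scale"/"zero_point".
    if node is None:
        return None
    # Sketch assumption: a get_attr node whose target lives on the owning module.
    assert node.op == "get_attr"
    return getattr(node.graph.owning_module, node.target)
```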
Oh, yes, that implementation did not consider
Changes

- Constant folding is applied to all TorchFX models before the quantization (see the sketch below)
- torch.export.export is used before the OV conversion
- After #2984: _compress_qdq_constant_transformation for the per-tensor case

Reason for changes

Related tickets

#2766

Tests

post_training_quantization/504/ finished successfully
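To illustrate the constant-folding step from the Changes list, a minimal sketch using torch.fx's experimental `const_fold` utility (NNCF's actual folding pass may differ):

```python
import torch
import torch.fx
from torch.fx.experimental.const_fold import split_const_subgraphs


class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 4))

    def forward(self, x):
        # (self.weight * 2) depends only on constants, so it can be
        # evaluated once ahead of time instead of on every forward pass.
        return x @ (self.weight * 2)


traced = torch.fx.symbolic_trace(TinyModel())
folded = split_const_subgraphs(traced)  # splits out const-only subgraphs
folded.run_folding()                    # evaluates them eagerly
print(folded.graph)                     # the folded product is now a baked-in attribute
```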