What is the recommended way to save the PyTorch model with quantizer information? #1032
Unanswered · RyougiKukoc asked this question in Q&A
Replies: 1 comment
This is incorrect; the state dict does contain the quantizer state. You can save and restore it with:

```python
torch.save(quant_model.state_dict(), 'checkpoint.pt')
quant_model.load_state_dict(torch.load('checkpoint.pt'))
```

If this is not what you are trying to do, can you explain it in more detail?
As I understand it, the state dict of the model does not contain any information about the quantizer (such as scale and zero point). So what is the recommended way to save the model during QAT? Just export to ONNX every time?
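One way to check this directly is to inspect the state dict of a QAT-prepared model. The sketch below uses stock `torch.ao.quantization` (which may differ from the quantization toolkit discussed in this thread, so treat it as an illustration, not this project's API) and lists which `scale` / `zero_point` buffers end up in the state dict:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qat_qconfig, prepare_qat

# Hypothetical tiny model, only for inspecting the QAT state dict.
class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.fc = nn.Linear(4, 2)
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = Tiny()
model.qconfig = get_default_qat_qconfig("fbgemm")
model.train()
prepare_qat(model, inplace=True)

# Run some data through so the observers record activation ranges.
model(torch.randn(8, 4))

# Fake-quantize modules register scale/zero_point as buffers, so they
# show up in the state dict and round-trip through torch.save /
# load_state_dict along with the weights.
quant_keys = [k for k in model.state_dict()
              if "scale" in k or "zero_point" in k]
print(quant_keys)
```

If the toolkit you are using stores its quantizer parameters as module buffers in the same way, a plain `state_dict()` checkpoint already carries them; ONNX export is only needed for deployment, not for checkpointing.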