
Custom NN Model Conversion to Brevitas Version #928

Closed. Answered by Giuseppe5.
Ba1tu3han asked this question in Q&A.

You are right, we need to work on expanding that part of our documentation.

Regarding which layers we support: they are listed in the brevitas/nn folder, but in general we support all of the most commonly used PyTorch layers, including RNN and LSTM.
I will open an issue to add a full list of supported layers to our documentation.

Regarding how to convert floating point layers to quantized ones: Brevitas offers the possibility of automatically converting a floating point model to its quantized counterpart, preserving all the weights and structure.

To see how we do that, you can take a look at:

https://github.com/Xilinx/brevitas/tree/master/src/brevitas_examples/imagenet_classification/ptq/ptq_evaluate.py
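Conceptually, this kind of automatic conversion is a recursive layer swap: walk the model, replace each supported float layer with its quantized counterpart, and carry the trained weights over unchanged. The sketch below illustrates that idea in plain Python with hypothetical stand-in classes (`Linear`, `QuantLinear`, `convert`); it is not the actual Brevitas API, just the pattern under the hood.

```python
# Illustrative sketch of float-to-quant conversion as a recursive layer
# swap. All names here are hypothetical stand-ins, not Brevitas classes.

class Linear:
    """Stand-in for a float layer holding trained weights."""
    def __init__(self, weight):
        self.weight = weight

class QuantLinear:
    """Stand-in for a quantized layer: same weights plus quant metadata."""
    def __init__(self, weight, bit_width=8):
        self.weight = weight
        self.bit_width = bit_width

# Map each float layer type to its quantized counterpart.
LAYER_MAP = {Linear: QuantLinear}

def convert(module):
    """Recursively swap float layers for quantized ones, keeping weights."""
    if type(module) in LAYER_MAP:
        # Re-instantiate as the quantized type, preserving the weights.
        return LAYER_MAP[type(module)](module.weight)
    if isinstance(module, dict):
        # Treat a dict as a container of named sub-modules and recurse.
        return {name: convert(child) for name, child in module.items()}
    return module

model = {"fc1": Linear([0.5, -0.25]), "fc2": Linear([1.0])}
qmodel = convert(model)
print(type(qmodel["fc1"]).__name__)  # QuantLinear
print(qmodel["fc1"].weight)          # [0.5, -0.25]  (weights preserved)
```

In the real library the same role is played by mapping PyTorch layers to their quantized equivalents in brevitas/nn; the evaluation script linked above shows the supported entry point for doing this on an actual model.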

Answer selected by Ba1tu3han.