First Dense Layer not created by Finn #466
Hi everyone,
We are trying to transform an encoder model for the Pynq-Z2 using the FINN framework, but we have run into a problem we are not able to solve. We are able to build the bitfile for our model, but the first layer is discarded and we cannot understand why. Below are the steps we follow to transform it (we flatten the features before the first Dense layer):
Replies: 1 comment 9 replies
Hi Brigilda,

Great to hear that you are using FINN for your project, and my apologies for the delayed response to your question. I hope this explanation helps in understanding the observed problem.

The reason the first layer is discarded has to do with the input datatype of the first layer. Since the FINN backend only supports quantized layers, each of the ONNX layers must operate on quantized (integer) tensors. In your definition of the quantized input tensor (`input_qt`), you have not supplied the `bit_width` parameter. As a consequence, the FINN-ONNX export from Brevitas does not annotate the input tensor, meaning that the FINN compiler cannot tell whether the input tensor is quantized, so the first Dense layer is not converted to a hardware layer.

In order to solve this, you should add the `bit_width` parameter to your quantized input tensor definition, so that the exported model carries the input datatype annotation.

For more examples on the FINN compiler and typical end-to-end flows, you could have a look at our extensive notebooks: https://github.com/Xilinx/finn/tree/main/notebooks/end2end_example.
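To see why the annotation matters, here is a minimal, framework-independent sketch of the uniform quantization that the (`scale`, `bit_width`, `signed`) triple on a quantized input tensor describes. The function name `quantize` is illustrative, not a Brevitas or FINN API; the point is that without `bit_width`, the integer range of the tensor is undefined, so the compiler cannot treat it as a quantized input:

```python
import numpy as np

def quantize(x, scale, bit_width, signed=True):
    # Clip to the integer range implied by bit_width/signed, then round.
    # These three values are exactly the information a quantized-tensor
    # annotation carries; omitting bit_width leaves the range unknown.
    if signed:
        qmin, qmax = -(2 ** (bit_width - 1)), 2 ** (bit_width - 1) - 1
    else:
        qmin, qmax = 0, 2 ** bit_width - 1
    codes = np.clip(np.round(x / scale), qmin, qmax).astype(np.int64)
    return codes, codes * scale  # integer codes and dequantized values

codes, deq = quantize(np.array([0.0, 0.49, 1.0, -1.2]), scale=0.5, bit_width=8)
# codes -> [0, 1, 2, -2], deq -> [0.0, 0.5, 1.0, -1.0]
```

With `bit_width=8` and `signed=True`, every code is guaranteed to lie in [-128, 127], which is the kind of guarantee FINN needs before it can map a layer to integer hardware.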