Residual Units question #1258
I don't completely understand the Residual Units in the UNet. In my understanding, the Residual Units are used in place of Skip Connections. My network works fine without the Residual Units, but my question is: how do I report the network architecture without the Residual Units? Or do the Residual Units serve another purpose?

Also, without the Residual Units the network seems to do only one convolution per layer (see comments below).

Thank you so much!

Best,
Kirsten
Hi @ericspod, I think this is a FAQ, could you please share some info?
Residual units are used to define the layers of the encode and decode paths of the network; skip connections are still present and are not substituted in any way. The classical UNet implements each layer as a sequence of convolutions followed by max pooling on the encode side, and upsampling followed by convolutions on the decode side. With our network, when `num_res_units` is greater than 0, those per-layer convolution sequences are built as residual units instead.
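To make this concrete, here is a minimal sketch (assuming a recent MONAI release in which the `UNet` constructor takes `spatial_dims`; older versions named this argument `dimensions`) that builds the same network with and without residual units:

```python
from monai.networks.nets import UNet

# Plain variant: each layer is a simple convolution block.
plain = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=0,
)

# Residual variant: each layer becomes a residual unit with 2 conv subunits.
residual = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
)

# Printing both shows that the SkipConnection modules are present either
# way; only the per-layer blocks differ.
print(plain)
print(residual)
```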
Thank you for your quick and clear response. But as you can see from the summary, when `num_res_units` is 0 the network only has one convolution before the number of features is doubled, whereas with the residual units there are two plus the unit itself. So is this intentional, and does it still constitute a 3D UNet?
With `num_res_units=0` the layer summary is as follows: [network printout omitted in the page capture]

The second-last decode layer is: [printout omitted]

while the final one is just a transpose convolution layer.
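One way to verify that layout is to drill into the nested modules directly (a sketch: the indexing below assumes MONAI's `UNet` builds each level as `Sequential(down, SkipConnection(subblock), up)`, and it reuses the `plain` model from the earlier snippet):

```python
# Final decode layer: just the transpose convolution.
print(plain.model[2])

# Second-last decode layer, one level deeper inside the skip connection.
print(plain.model[1].submodule[2])
```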
Okay. So without the residual units the network is not equivalent to a UNet?
Structurally it's the same in that it's an autoencoder architecture with skip connections, but yes, it is not completely equivalent to the original definition.
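The distinction that is easy to conflate here can be shown in a few lines of plain PyTorch (an illustrative sketch, not MONAI's own code; `TinyResidualUnit` is a made-up name):

```python
import torch
import torch.nn as nn

class TinyResidualUnit(nn.Module):
    """Residual unit: the shortcut is an addition *inside* one block."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.PReLU(),
            nn.Conv3d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.conv(x) + x  # residual connection

# UNet skip connection: encoder features are concatenated with the
# upsampled decoder features *across* the network, not added inside a block.
enc_feat = torch.randn(1, 16, 8, 8, 8)  # from the encode path
dec_feat = torch.randn(1, 16, 8, 8, 8)  # upsampled decode features
skip = torch.cat([enc_feat, dec_feat], dim=1)  # shape (1, 32, 8, 8, 8)
```

Both mechanisms coexist in MONAI's `UNet` when `num_res_units > 0`: the residual additions live inside each layer, while the concatenating skip connections span the encode and decode paths.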
Okay, thank you. I would use BasicUnit instead, but I don't think it is working yet. In any case, thank you for the explanation.
Please cite the paper in the UNet docstring; it uses residual units, but we don't have any reference for a variant that doesn't.
And what, then, is the difference between 1 and 2 residual units? Because then you have three convolutions per layer, I think.
The [remainder of this sentence was lost in the page capture]. Here is the output: [network printout omitted]

The second-last decode layer is: [printout omitted]
Here we have the transpose convolution followed by a residual unit with one convolution. I guess this does vary from what I'd said, in that the transpose convolution is outside the residual connection, which is needed since the input has to be upsampled, but each decode layer does have only two convolutions, as per the classical UNet.
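That decode-layer structure can be reproduced in isolation from MONAI's building blocks (a sketch; `Convolution` and `ResidualUnit` are real MONAI blocks, but the argument names assume a recent release and the channel sizes are made up for illustration):

```python
import torch
from monai.networks.blocks import Convolution, ResidualUnit

decode_layer = torch.nn.Sequential(
    # Transpose convolution, outside the residual connection because it
    # changes the spatial size (upsamples the input).
    Convolution(
        spatial_dims=3,
        in_channels=64,
        out_channels=32,
        strides=2,
        is_transposed=True,
    ),
    # Residual unit containing a single convolution.
    ResidualUnit(
        spatial_dims=3,
        in_channels=32,
        out_channels=32,
        subunits=1,
    ),
)
print(decode_layer)  # two convolutions in total per decode layer
```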