
Conv2DTranspose #68

Open
keilalopezbl opened this issue Mar 5, 2018 · 3 comments

Comments

@keilalopezbl

Hi Jocicmarko, I have this question...

What exactly is the Conv2DTranspose layer in your code doing? I assume it increases the spatial dimensions by a factor of 2 (I mean, it performs the opposite operation of the MaxPooling layer). Am I correct?

@mrkolarik

mrkolarik commented Mar 5, 2018

Hi there,

you are partially correct - Conv2DTranspose is something like the opposite of a convolution layer - you could call it deconvolution. The increase of the dimensions by a factor of 2 comes from the parameter strides=(2, 2) at each transpose layer, not from the mathematical operation of transposed convolution itself. There is a nice visualization here: https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers. If I'm wrong, somebody please correct me :)
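
A minimal sketch illustrating this point (assuming TensorFlow 2.x and its Keras API; the 16x16 input is a made-up example, not a shape from this repo): with strides=(2, 2) the output height and width double, while with strides=(1, 1) and padding='same' the spatial size is unchanged:

import tensorflow as tf
from tensorflow.keras.layers import Conv2DTranspose

x = tf.zeros((1, 16, 16, 512))  # dummy batch with a 16x16 feature map

doubled = Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(x)
same = Conv2DTranspose(256, (2, 2), strides=(1, 1), padding='same')(x)

print(doubled.shape)  # (1, 32, 32, 256) - strides=(2, 2) doubles H and W
print(same.shape)     # (1, 16, 16, 256) - strides=(1, 1) keeps H and W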

@keilalopezbl
Author

@mrkolarik Thanks for sharing!

@nabsabraham

Hi all,
I don't understand why, after the Conv2DTranspose operation, we use Conv2D again - this keeps the feature map size, so what is the point of doing it? Is it learning during the upsampling process too?

u6 = Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same') (c5)
u6 = concatenate([u6, c4])
c6 = Conv2D(256, (3, 3), activation='relu', padding='same') (u6)
c6 = Conv2D(256, (3, 3), activation='relu', padding='same') (c6)
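
For reference, a standalone sketch tracing the shapes through this decoder block (assuming TensorFlow 2.x and its Keras API; c5 and c4 here are hypothetical dummy tensors standing in for the real encoder outputs). The Conv2DTranspose itself only doubles the spatial size; the two Conv2D layers after the concatenate keep that size but learn to fuse the upsampled features with the skip-connection features from c4 - in the standard U-Net design, learning happens at every decoder stage too, not only on the downsampling path:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, concatenate

# Hypothetical stand-ins for the encoder outputs of the real model
c5 = tf.zeros((1, 16, 16, 512))  # bottleneck features
c4 = tf.zeros((1, 32, 32, 256))  # matching skip connection

u6 = Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same') (c5)
print(u6.shape)  # (1, 32, 32, 256) - upsampled to match c4

u6 = concatenate([u6, c4])
print(u6.shape)  # (1, 32, 32, 512) - skip features appended on the channel axis

c6 = Conv2D(256, (3, 3), activation='relu', padding='same') (u6)
c6 = Conv2D(256, (3, 3), activation='relu', padding='same') (c6)
print(c6.shape)  # (1, 32, 32, 256) - same spatial size, fused features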
