
Question about the code in basic gcn unit #73

Open
daleigehhh opened this issue May 13, 2024 · 0 comments
Hi, @abduallahmohamed

While inspecting the model code, I noticed the `ConvTemporalGraphical` module is defined as:

```python
def __init__(self, in_channels, out_channels, kernel_size,
             t_kernel_size=1, t_stride=1, t_padding=0,
             t_dilation=1, bias=True):
    super(ConvTemporalGraphical, self).__init__()
    self.kernel_size = kernel_size
    self.conv = nn.Conv2d(
        in_channels,
        out_channels,
        kernel_size=(t_kernel_size, 1),
        padding=(t_padding, 0),
        stride=(t_stride, 1),
        dilation=(t_dilation, 1),
        bias=bias)
```

It seems this unit does not split the output channels across the spatial kernel partitions of the graph? (I am new to GCNs, so if I am wrong, please just ignore this.) In the original ST-GCN code, the number of output channels of this unit is expanded to `kernel_size * out_channels`, and the resulting feature maps are then contracted against the `kernel_size` adjacency matrices in `A`:
```python
    self.conv = nn.Conv2d(
        in_channels,
        out_channels * kernel_size,
        kernel_size=(t_kernel_size, 1),
        padding=(t_padding, 0),
        stride=(t_stride, 1),
        dilation=(t_dilation, 1),
        bias=bias)

def forward(self, x, A):
    assert A.size(0) == self.kernel_size

    x = self.conv(x)

    n, kc, t, v = x.size()
    x = x.view(n, self.kernel_size, kc // self.kernel_size, t, v)
    x = torch.einsum('nkctv,kvw->nctw', (x, A))

    return x.contiguous(), A
```
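To make sure I understand the ST-GCN aggregation above, here is a shape-level sketch using NumPy in place of `torch.einsum` (all sizes here are arbitrary illustration values, not from either codebase): each of the `k` channel groups is mixed over the joint dimension with its own adjacency matrix `A[k]`, and the groups are then summed away.

```python
import numpy as np

# Illustration sizes (my own choices): batch n, out channels c, frames t,
# joints v, and k spatial-kernel partitions.
n, c, t, v = 2, 4, 5, 6
k = 3

# Conv output after the view to (n, k, c, t, v) in the quoted forward():
x = np.random.randn(n, k, c, t, v)
# One (v, v) adjacency matrix per partition:
A = np.random.randn(k, v, v)

# 'nkctv,kvw->nctw': partition i mixes joint features with A[i],
# then the k partitions are summed into a single (n, c, t, v) tensor.
out = np.einsum('nkctv,kvw->nctw', x, A)
assert out.shape == (n, c, t, v)

# Equivalent explicit loop, making the sum over partitions visible:
out_loop = sum(np.einsum('nctv,vw->nctw', x[:, i], A[i]) for i in range(k))
assert np.allclose(out, out_loop)
```

So, as I read it, the `out_channels * kernel_size` expansion gives each adjacency partition its own learned feature transform before the sum, which is what seems to be absent from the unit quoted above.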

Did you run ablation experiments comparing these two settings? Looking forward to your reply, thank you!
