When I replace SelfAttention with ImageLinearAttention in the Vision Transformer, using the code below, I get a RuntimeError. The code for ImageLinearAttention is from https://github.com/lucidrains/linear-attention-transformer/blob/master/linear_attention_transformer/images.py, except that I removed the number of channels, as you can see in the commented code.

The error is:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [384, 512, 1, 1], but got 3-dimensional input of size [1, 1984, 512] instead

Also, the data fed to the transformer is of size torch.Size([1983, 512]) and my batch size is 1.

The full log is:
$ bash scripts/train.sh
train: True test: False cam: False
preparing datasets and dataloaders......
total_train_num: 176
creating models......
n_class: 2
in_dim: 512
value dim: 64
chan out: 512
kernel_size: 1
out_conv_kwargs: {'padding': 0}
in_chan: 768
in_dim: 512
value dim: 64
chan out: 512
kernel_size: 1
out_conv_kwargs: {'padding': 0}
in_chan: 768
=>Epoches 1, learning rate = 0.0010000, previous best = 0.0000
torch.Size([1983, 512])
features size: torch.Size([1983, 512])
/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:129: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:154: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
max_feature_num: 1983
batch feature size: torch.Size([1, 1983, 512])
x.shape: torch.Size([1, 1984, 512])
*x.shape is: 1 1984 512
heads: 12
Traceback (most recent call last):
  File "main.py", line 148, in <module>
    preds,labels,loss = trainer.train(sample_batched, model)
  File "/SeaExp/mona/research/code/cc/helper.py", line 71, in train
    pred,labels,loss = model.forward(feats, labels, masks)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 166, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/SeaExp/mona/research/code/cc/models/Transformer.py", line 31, in forward
    out = self.transformer(X)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/SeaExp/mona/research/code/cc/models/linear_att_ViT.py", line 262, in forward
    feat = self.transformer(emb)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/SeaExp/mona/research/code/cc/models/linear_att_ViT.py", line 206, in forward
    out = layer(out)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/SeaExp/mona/research/code/cc/models/linear_att_ViT.py", line 174, in forward
    out = self.attn(out)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/SeaExp/mona/research/code/cc/models/linear_att_ViT.py", line 92, in forward
    q, k, v = (self.to_q(x), self.to_k(x), self.to_v(x))
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 443, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 439, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [384, 512, 1, 1], but got 3-dimensional input of size [1, 1984, 512] instead
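If it helps, here is a minimal sketch of what I think is going on. My assumption (from reading the lucidrains code) is that ImageLinearAttention builds its to_q/to_k/to_v projections as 1x1 nn.Conv2d layers, so they expect image-style (batch, channels, height, width) input rather than the (batch, tokens, dim) sequence my encoder passes in; the names below are only stand-ins to reproduce the shape mismatch, not the actual repo code:

import torch
import torch.nn as nn

# Hypothetical stand-in for the query projection: a 1x1 Conv2d with
# heads * key_dim = 12 * 32 = 384 output channels and 512 input channels,
# which matches the weight shape [384, 512, 1, 1] in the error message.
to_q = nn.Conv2d(512, 12 * 32, kernel_size=1)

tokens = torch.randn(1, 1984, 512)        # (batch, tokens, dim) -- what my encoder block passes in
# to_q(tokens)                            # on my PyTorch version this raises the RuntimeError quoted above

image_like = torch.randn(1, 512, 62, 32)  # (batch, channels, H, W) -- what the conv actually expects
print(to_q(image_like).shape)             # torch.Size([1, 384, 62, 32])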
How can I fix this error? I am calling ImageLinearAttention as follows in the EncoderBlock of the Vision Transformer:
class EncoderBlock(nn.Module):
    def __init__(self, in_dim, mlp_dim, num_heads, dropout_rate=0.1, attn_dropout_rate=0.1):
        super(EncoderBlock, self).__init__()

        self.norm1 = nn.LayerNorm(in_dim)
        # self.attn = SelfAttention(in_dim, heads=num_heads, dropout_rate=attn_dropout_rate)
        ## note Mona: not sure if I am correctly passing the params
        # what about attn_dropout_rate=0.1
        ## I don't know
        print('in_dim: ', in_dim)
        self.attn = ImageLinearAttention(chan=in_dim, heads=num_heads, key_dim=32)
        if dropout_rate > 0:
            self.dropout = nn.Dropout(dropout_rate)
        else:
            self.dropout = None
        self.norm2 = nn.LayerNorm(in_dim)
        self.mlp = MlpBlock(in_dim, mlp_dim, in_dim, dropout_rate)

    def forward(self, x):
        residual = x
        out = self.norm1(x)
        out = self.attn(out)
        if self.dropout:
            out = self.dropout(out)
        out += residual
        residual = out
        out = self.norm2(out)
        out = self.mlp(out)
        out += residual
        return out
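One idea I am experimenting with (just a sketch, I am not sure it is the right fix) is to wrap ImageLinearAttention so that the token sequence is reshaped into a 4-D feature map before the attention and flattened back afterwards. This assumes the token count factors as H * W (here 1984 = 62 * 32, i.e. 1983 patch tokens plus the class token) and that ImageLinearAttention returns the same (batch, channels, H, W) shape it receives; the wrapper name is my own, not from the repo:

class SeqToImageAttention(nn.Module):
    # Hypothetical wrapper: present a (batch, tokens, dim) sequence to an
    # attention module that expects (batch, channels, H, W) input.
    def __init__(self, attn, grid_hw):
        super().__init__()
        self.attn = attn          # e.g. ImageLinearAttention(chan=in_dim, heads=num_heads, key_dim=32)
        self.grid_hw = grid_hw    # (H, W) with H * W == number of tokens

    def forward(self, x):                              # x: (batch, tokens, dim)
        b, n, d = x.shape
        h, w = self.grid_hw
        assert n == h * w, "token count must equal H * W"
        x = x.transpose(1, 2).reshape(b, d, h, w)      # -> (batch, dim, H, W)
        x = self.attn(x)                               # assumed to keep (batch, dim, H, W)
        return x.reshape(b, d, n).transpose(1, 2)      # -> back to (batch, tokens, dim)

In EncoderBlock I would then use something like self.attn = SeqToImageAttention(ImageLinearAttention(chan=in_dim, heads=num_heads, key_dim=32), grid_hw=(62, 32)), or handle the class token separately. Is that the intended way to use ImageLinearAttention inside a ViT encoder, or is there a better fix?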