
An error occurs when running the code #3

Open
ideasplus opened this issue Jul 26, 2023 · 9 comments

Comments

@ideasplus

Hi there,

When I try to run your code on the Semantic-KITTI dataset, I encounter the following error:

Reusing positional embeddings.
Traceback (most recent call last):
  File "main.py", line 334, in <module>
    exp = Experiment(settings)
  File "main.py", line 80, in __init__
    self.model = self._initModel()
  File "main.py", line 91, in _initModel
    model = build_rangevit_model(
  File "main.py", line 32, in build_rangevit_model
    model = models.RangeViT(
  File "/data/rangevit/models/rangevit.py", line 347, in __init__
    resized_pos_emb = resize_pos_embed(pretrained_state_dict['encoder.pos_embed'],
KeyError: 'encoder.pos_embed'

Could you tell me how to solve this problem?

@angelika1108
Collaborator

angelika1108 commented Jul 31, 2023

Hi!
Thank you for your question. Which pre-trained weights are you loading in the model? If you are training from scratch, you can set the variable reuse_pos_emb: false in the config file.

@ideasplus
Author

ideasplus commented Aug 2, 2023

> Hi! Thank you for your question. Which pre-trained weights are you loading in the model? If you are training from scratch, you can set the variable reuse_pos_emb: false in the config file.

Hi,

I load the pre-trained model model_skitti_train_cs_init_h128.pth, which I downloaded from the link you provided.

Init a recoder at  trained_models/log_exp_kitti
Loading pretrained parameters from pretrained_models/model_skitti_train_cs_init_h128.pth
Loading pretrained parameters from pretrained_models/model_skitti_train_cs_init_h128.pth
Loading pretrained parameters from pretrained_models/model_skitti_train_cs_init_h128.pth
Loading pretrained parameters from pretrained_models/model_skitti_train_cs_init_h128.pth
dict_keys(['model', 'epoch'])
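The dict_keys(['model', 'epoch']) output is the telling detail: the loaded file is a full RangeViT training checkpoint (weights nested under 'model'), not a flat ViT backbone state dict with top-level keys like 'encoder.pos_embed'. A small illustrative check, with plain dicts standing in for torch.load results; the heuristic is an assumption for clarity, not project code:

```python
def checkpoint_kind(state):
    """Classify a loaded state dict by its top-level keys."""
    if 'model' in state and 'epoch' in state:
        return 'rangevit checkpoint'      # full training checkpoint
    if any(k.startswith('encoder.') for k in state):
        return 'vit backbone state dict'  # what the pos-embed reuse expects
    return 'unknown'
```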

@gidariss
Collaborator

gidariss commented Aug 4, 2023

Hi @ideasplus. Can you share the command that you are running for using the code?

@ideasplus
Author

> Hi @ideasplus. Can you share the command that you are running for using the code?

Sure, I use the following command:
python -m torch.distributed.launch --nproc_per_node=4 --master_port=63545 --use_env main.py 'config_kitti.yaml' ./semantic-kitti/sequences/ --save_path trained_models/ --pretrained_model pretrained_models/model_skitti_train_cs_init_h128.pth

@illuosion

Hello, I have encountered the same problem as you. Have you solved it?


@wenyi-li

> Hi! Thank you for your question. Which pre-trained weights are you loading in the model? If you are training from scratch, you can set the variable reuse_pos_emb: false in the config file.

I have encountered the same error, KeyError: 'encoder.pos_embed', even after setting the variable reuse_pos_emb: false.

@gidariss
Collaborator

gidariss commented Nov 6, 2023

> python -m torch.distributed.launch --nproc_per_node=4 --master_port=63545 --use_env main.py 'config_kitti.yaml' ./semantic-kitti/sequences/ --save_path trained_models/ --pretrained_model pretrained_models/model_skitti_train_cs_init_h128.pth

Hello. The --pretrained_model argument expects a pre-trained image backbone, which will be used for initializing the ViT encoder inside RangeViT. You can download the pre-trained weights for these image backbones from here:

In particular, we initialize RangeViT’s backbone with ViTs pre-trained (a) on supervised ImageNet21k classification and then fine-tuned for supervised image segmentation on Cityscapes with Segmenter (entry Cityscapes), (b) on supervised ImageNet21k classification (entry IN21k), (c) with the DINO self-supervised approach on ImageNet1k (entry DINO), and (d) trained from scratch (entry Random). The Cityscapes pre-trained ViT encoder weights can be downloaded from here.

In the command that you sent, it seems that you pass the path of a pre-trained RangeViT model, which is not what the training script expects for the --pretrained_model argument. Also, there is no need to train this already pre-trained RangeViT model; you can directly evaluate it by running:

python -m torch.distributed.launch --nproc_per_node=1 --master_port=63545 \
    --use_env main.py 'config_kitti.yaml' \
    --data_root './semantic-kitti/sequences/' \
    --save_path '<path_to_log>' \
    --checkpoint 'pretrained_models/model_skitti_train_cs_init_h128.pth' \
    --val_only

@liam-sbhoo

Hi @gidariss

I am trying to run evaluation of the pre-trained RangeViT model on the nuScenes dataset. However, it seems the pre-trained RangeViT checkpoint doesn't contain a key named "epoch", which _loadCheckpoint requires. I suppose I can set it to an arbitrary number, right?
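If the goal is just to unblock evaluation, padding the checkpoint dict before loading is one possible workaround. This is a sketch under the assumption that _loadCheckpoint only reads checkpoint['epoch'] for bookkeeping; please verify against the actual code.

```python
def pad_missing_epoch(checkpoint, default_epoch=0):
    # Insert a placeholder 'epoch' only if the released checkpoint lacks it;
    # an existing value is left untouched.
    checkpoint.setdefault('epoch', default_epoch)
    return checkpoint
```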

@jodyngo

jodyngo commented Jun 28, 2024

@gidariss please update the link to the pre-trained image backbone so this issue can be resolved. (The Cityscapes pre-trained ViT encoder weights can be downloaded from here.)
