
Training issue on Multiple Gpu #81

Open
DerrickXuNu opened this issue Feb 20, 2023 · 3 comments

Comments

@DerrickXuNu

Dear authors,

First of all, thanks for your great work and the effort to open-source this project! I ran into an issue while training the model. I am able to train on a single GPU, but when I pass `--num_gpus 2` to `train.py`, it throws this error:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

Do you have any idea how to fix this? Thanks!
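This error usually means the model's weights were moved to the GPU but the input batch was not. A minimal, device-agnostic sketch of the fix pattern (the layer and shapes here are just illustrative, not from the project's code):

```python
import torch
import torch.nn as nn

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)  # weights live on `device`
x = torch.randn(8, 4)               # input batch starts on the CPU

# Without this line, calling model(x) on a CUDA machine raises the
# "Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor)"
# RuntimeError, because input and weights are on different devices.
x = x.to(device)

out = model(x)
print(out.shape)
```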

@yaoshanliang

Same error

@ejhung

ejhung commented Mar 1, 2023

I'm also working on reproducing the training step according to the author's manual.
I patched `train.py` last week, so I'm not sure whether my fix will resolve your issue. What I've changed is:

```python
# original
if opt.num_gpus > 0:
    model = model.cuda()

# revised
if opt.num_gpus > 0:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    _model = model.cuda()                       # move the weights to the GPU
    model = nn.DataParallel(_model).to(device)  # replicate across all visible GPUs
```

Hope it helps you.

BTW, leveraging multiple GPUs with the `train.py` script wasn't working for me, since DDP functions are not included in that script. So in my case I'm trying `train_ddp.py` for multi-GPU training, but hitting various issues as well.
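For reference, a minimal single-machine DDP setup looks roughly like the sketch below. This is illustrative only and not the actual `train_ddp.py` logic; it uses the `gloo` backend with `world_size=1` so it also runs on a CPU-only box, whereas real multi-GPU training would spawn one process per GPU with the `nccl` backend.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def demo(rank: int = 0, world_size: int = 1):
    # Rendezvous info for the process group (single machine, single process here).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = nn.Linear(4, 2)       # toy model; the real one comes from the repo
    ddp_model = DDP(model)        # gradients are synchronized across ranks
    out = ddp_model(torch.randn(8, 4))

    dist.destroy_process_group()
    return out.shape

shape = demo()
print(shape)
```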

@happyday-lkj

> I'm also working on reproducing the training step according to the author's manual. I tried to revise the issue of train.py last week, so not sure my fix would resolve your issue. What I've changed is:
>
> ```python
> # original
> if opt.num_gpus > 0:
>     model = model.cuda()
>
> # revised
> if opt.num_gpus > 0:
>     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
>     _model = model.cuda()
>     model = nn.DataParallel(_model).to(device)
> ```
>
> Hope it helps you.
>
> BTW, leveraging multi gpus with train.py script wasn't working for me since DDP functions are not included in the script. So in my case, I'm trying with train_ddp.py for training with multi gpus, but heading various issues as well.

Hello, I trained the model with train_ddp.py on multiple GPUs and ran into issues like https://github.com/datvuthanh/HybridNets/issues/90
