Error with running code on multiple GPUs #7
Comments
If you want to run the models via Data Parallelism you will need to wrap them in …
Yeah, I tried doing that by changing: …
Yeah, there is some preprocessing still happening in …
While trying to train with 4 GPUs and arg.multiGPU = True, the following error occurs: torch.nn.modules.module.ModuleAttributeError: 'ModelU' object has no attribute 'lxrt_encoder'
Traceback (most recent call last):
  File "hm.py", line 392, in <module>
    main()
  File "hm.py", line 346, in main
    hm = HM()
  File "hm.py", line 91, in __init__
    self.model.lxrt_encoder.multi_gpu()
  File "/home/samyakxd/miniconda3/envs/vilio_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
    type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'ModelU' object has no attribute 'lxrt_encoder'
Even after commenting out the above line in the hm.py file, the model seems to train on only a single GPU according to nvidia-smi.
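For reference, this ModuleAttributeError is the usual symptom of wrapping a model in torch.nn.DataParallel and then accessing its submodules directly: DataParallel stores the wrapped network under `.module`, so the original attributes move one level down. A minimal sketch (the ModelU class and its `lxrt_encoder` attribute here are placeholders standing in for vilio's real classes):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for vilio's ModelU; the real class holds an
# LXMERT-style encoder under the attribute `lxrt_encoder`.
class ModelU(nn.Module):
    def __init__(self):
        super().__init__()
        self.lxrt_encoder = nn.Linear(8, 2)  # placeholder submodule

    def forward(self, x):
        return self.lxrt_encoder(x)

model = ModelU()
model = nn.DataParallel(model)  # replicate the module across available GPUs

# After wrapping, attributes of ModelU are no longer reachable on the wrapper:
#   model.lxrt_encoder        -> AttributeError (the error in the traceback)
#   model.module.lxrt_encoder -> works
encoder = model.module.lxrt_encoder
```

So a call like `self.model.lxrt_encoder.multi_gpu()` would need to become `self.model.module.lxrt_encoder...` once the model is wrapped (or the wrapping has to happen after such calls). Note also that DataParallel only splits work across GPUs during forward passes on batched input; if nvidia-smi shows one busy GPU, it is worth checking that the model was actually wrapped and that the batch is large enough to be scattered.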