Ignore pin_memory if cuda is not available
Differential Revision: D68357863

Pull Request resolved: #331
vbourgin authored Jan 23, 2025
1 parent ca1389a commit 686c00a
Showing 1 changed file with 9 additions and 1 deletion.
10 changes: 9 additions & 1 deletion src/spdl/dataloader/_pytorch_dataloader.py
@@ -323,7 +323,15 @@ def get_pytorch_dataloader(

     from torch.utils.data._utils.pin_memory import pin_memory as pin_memory_fn
 
-    transfer_fn = pin_memory_fn if pin_memory else None
+    if pin_memory and not torch.cuda.is_available():
+        _LG.warning(
+            "'pin_memory' is set to True, but no accelerator is available; "
+            "pinned memory will not be used."
+        )
+
+    transfer_fn = (
+        pin_memory_fn if pin_memory and torch.accelerator.is_available() else None
+    )
 
     mp_ctx = (
         multiprocessing_context
