
Runtime memory policy for multireducers #1756

Open

tomstitt (Member) opened this issue Oct 22, 2024 · 0 comments
Labels: API/usability, reviewed

Is your feature request related to a problem? Please describe.

The RAJA::hip_multi_reduce_atomic and RAJA::cuda_multi_reduce_atomic multireducers allocate GPU memory even when they are only used in a CPU kernel. Our application supports a runtime compute policy in GPU builds so that we can run CPU-only when desired; multireducers break this because we now allocate GPU memory even in CPU mode.
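
For example, a minimal sketch of the pattern that breaks (the names `histogram`, `bins`, and `data` are placeholders, not our actual code):

```cpp
#include "RAJA/RAJA.hpp"

// Constructing the multi-reducer allocates GPU memory up front, even though
// the kernel below only ever dispatches sequentially on the CPU.
void histogram(const int* bins, const double* data, int N, int num_bins)
{
  RAJA::MultiReduceSum<RAJA::hip_multi_reduce_atomic, double> sums(num_bins, 0.0);

  RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N), [=](int i) {
    sums[bins[i]] += data[i];
  });
}
```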

Describe the solution you'd like

We would like the GPU multireducers to dynamically choose their allocator based on the kernel they are captured by, similar to the regular GPU RAJA reducers such as RAJA::hip_reduce and RAJA::cuda_reduce.
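
For reference, this is the pattern that already works with the regular reducers (a rough sketch; `use_gpu`, `range`, and `data` are stand-ins for our runtime-policy machinery):

```cpp
RAJA::ReduceSum<RAJA::hip_reduce, double> total(0.0);

if (use_gpu) {
  // GPU dispatch: the reducer backs itself with GPU memory.
  RAJA::forall<RAJA::hip_exec<256>>(range, [=] RAJA_DEVICE (int i) {
    total += data[i];
  });
} else {
  // CPU dispatch: no GPU memory is needed.
  RAJA::forall<RAJA::seq_exec>(range, [=](int i) {
    total += data[i];
  });
}
double result = total.get();
```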

Describe alternatives you've considered

We've considered templating the routines that use multireducers, with a sequential dispatch for the CPU and a platform-dependent dispatch for the GPU, but that requires additional boilerplate; see the sketch below.
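
Roughly (policy and name choices are illustrative, not our actual code):

```cpp
template <typename ExecPol, typename MultiReducePol>
void histogram(const int* bins, const double* data, int N, int num_bins,
               double* out)
{
  RAJA::MultiReduceSum<MultiReducePol, double> sums(num_bins, 0.0);

  RAJA::forall<ExecPol>(RAJA::RangeSegment(0, N),
                        [=] RAJA_HOST_DEVICE (int i) {
    sums[bins[i]] += data[i];
  });

  for (int b = 0; b < num_bins; ++b) { out[b] = sums.get(b); }
}

// Every call site now has to branch on the runtime mode:
//   mode == CPU ? histogram<RAJA::seq_exec, RAJA::seq_multi_reduce>(...)
//               : histogram<RAJA::hip_exec<256>, RAJA::hip_multi_reduce_atomic>(...);
```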

Additional context

n/a

@rhornung67 added the API/usability and reviewed labels on Oct 29, 2024