Resolve register spills in dispatch of __subgroup_radix_sort #1626
Conversation
Signed-off-by: Matthew Michel <[email protected]>
include/oneapi/dpl/pstl/hetero/dpcpp/parallel_backend_sycl_radix_sort.h
// In __subgroup_radix_sort, we request a sub-group size via _ONEDPL_SYCL_REQD_SUB_GROUP_SIZE_IF_SUPPORTED
// based upon the iters per item. For the below case, register spills that result in runtime exceptions have
// been observed on accelerators that do not support the requested sub-group size of 16.
else if (__n <= 8192 && __wg_size * 8 <= __max_wg_size && __dev_has_sg16)
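The guarded dispatch above can be sketched as a standalone function. This is a simplified stand-in, not the oneDPL implementation: in the real code, __dev_has_sg16 would be derived from the device's reported sub-group sizes (in SYCL 2020, the sycl::info::device::sub_group_sizes query), which here is modeled as a plain vector argument.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Stand-in for checking the list returned by
// device.get_info<sycl::info::device::sub_group_sizes>().
bool __supports_sub_group_size(const std::vector<std::size_t>& __sg_sizes, std::size_t __size)
{
    return std::find(__sg_sizes.begin(), __sg_sizes.end(), __size) != __sg_sizes.end();
}

// Returns true when the largest single work-group radix sort path may be
// taken, mirroring the `else if` condition in the diff: the guard skips this
// path on devices that cannot honor the requested sub-group size of 16.
bool __use_large_single_wg_path(std::size_t __n, std::size_t __wg_size, std::size_t __max_wg_size,
                                const std::vector<std::size_t>& __sg_sizes)
{
    const bool __dev_has_sg16 = __supports_sub_group_size(__sg_sizes, 16);
    return __n <= 8192 && __wg_size * 8 <= __max_wg_size && __dev_has_sg16;
}
```

With this shape, a device advertising sub-group sizes {8, 16, 32} takes the path, while one advertising only {32} falls through to the next dispatch case.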
This avoids single work-group radix sort in some of the cases which expect / request sub-group size 16, but not all. To avoid all cases which want sg 16, it looks like we would need to check __dev_has_sg16 in all cases where __n > 256. That is likely overkill, but do we have justification for this being the size cutoff of cases affected by this register overflow error?
In other words, on different hardware, might we see the same error for the __n <= 4096 case or smaller?
I have found that in the other cases that request a sub-group size of 16, using a sub-group size of 32 is safe from register spills. I have looked through the different CUDA architectures, and the register file size appears to have remained constant over time, so I believe this will resolve the issue on NVIDIA GPUs. I have also verified this with sm_75.
In the case of a general device, I think this is a risk anywhere we use private memory. It is difficult to fully protect against, since there is no SYCL query for the maximum private memory per work-group. On some hardware platforms, such as Intel GPUs, the registers spill into global memory and only impact performance. On CUDA devices, it causes a runtime exception.
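The pressure described above can be made concrete with back-of-the-envelope arithmetic. The figures below are assumptions, not queried values: recent CUDA architectures provide 65536 32-bit registers per SM with a per-thread cap of 255, and the sketch assumes a single work-group occupies the SM alone (real occupancy and allocation granularity differ).

```cpp
#include <algorithm>

// Rough upper bound on registers available per work-item when one work-group
// of __wg_size items runs alone on one SM. Shows why larger work-groups leave
// each work-item less register space and spill sooner.
int __regs_per_work_item(int __wg_size)
{
    const int __regs_per_sm = 65536;       // assumed: 32-bit registers per SM
    const int __max_regs_per_thread = 255; // assumed: CUDA per-thread cap
    return std::min(__regs_per_sm / __wg_size, __max_regs_per_thread);
}
```

For example, a 512-item work-group leaves at most 128 registers per work-item and a 1024-item work-group only 64, so per-item state that fits comfortably in the small cases can overflow in the largest ones.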
Fair enough, I just wanted to make sure that we had good justification for this choice, rather than this merely being where we have experienced errors.
It may be good to mention in the comment that, while smaller cases would prefer sub-group size 16 and may end up with 32, they still fit within the register file on the hardware we are aware of, so that the intention is clear for future maintenance.
I have updated the comment.
LGTM
LGTM
On certain accelerators, particularly NVIDIA GPUs with sm_80 and sm_90, I have observed the following runtime exceptions in our single work-group radix sort:

In the cases causing this crash, we attempt to use a sub-group size of 16 and pass an empty attribute to the kernel for compilation targets that we know do not have this support. The sub-group size of 16 gives each work-item a greater portion of the register space to use. The two largest single work-group cases result in register spills on common hardware that does not support size-16 sub-groups. I have added a check to query the device's sub-group sizes and skip these cases when the device lacks support.