Remove duplicated host buffer implementations from oneDPL code #1452

Closed
wants to merge 13 commits into main from dev/skopienko/host_buffer_impl_refacotring

Conversation

SergeyKopienko
Contributor

@SergeyKopienko commented Mar 19, 2024

The goals of this PR:

  • to have only one implementation of the host __buffer for all backends (serial, tbb and omp), in order to avoid code duplication.

At the moment, our three implementations of the host __buffer are absolutely identical except for the allocators they use:

  • serial backend implementation - uses std::allocator;
  • tbb backend implementation - uses tbb::tbb_allocator;
  • omp backend implementation - uses std::allocator.

This PR does not change these choices; it only makes the allocator an explicit, additional template parameter of the __buffer implementation class.

Implementation details

Now there is only one such class in the oneDPL code: __buffer_impl, which has been renamed to __buffer_impl_host:

template <typename _ExecutionPolicy, typename _Tp, typename _TAllocator>
class __buffer_impl_host
{
    // ...
};
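
For illustration, here is a minimal sketch of what such an allocator-parameterized host buffer could look like (the member names and exact interface below are hypothetical, not the actual oneDPL implementation):

template <typename _ExecutionPolicy, typename _Tp, typename _TAllocator>
class __buffer_impl_host
{
    // Allocator selected per backend (std::allocator or tbb::tbb_allocator)
    _TAllocator __allocator_;
    _Tp* __ptr_;
    const ::std::size_t __count_;

  public:
    // Acquire raw storage for __n elements from the backend-specific allocator
    explicit __buffer_impl_host(::std::size_t __n)
        : __allocator_(), __ptr_(__allocator_.allocate(__n)), __count_(__n)
    {
    }

    operator bool() const { return __ptr_ != nullptr; }
    _Tp* get() const { return __ptr_; }

    // Return the storage to the same allocator on scope exit
    ~__buffer_impl_host() { __allocator_.deallocate(__ptr_, __count_); }
};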

The definition of the __buffer template alias inside each host backend looks like this:

template <typename _BackendOrDispatchTag, typename _ExecutionPolicy, typename _Tp,
          typename _TAllocator = typename oneapi::dpl::__internal::__backend_buffer_allocator_selector<
              _Tp, _BackendOrDispatchTag>::allocator_type>
using __buffer = oneapi::dpl::__utils::__buffer_impl_host<::std::decay_t<_ExecutionPolicy>, _Tp, _TAllocator>;

One important detail: the template parameter _BackendOrDispatchTag accepts two kinds of tags (see the usage sketch after this list):

  • our host backend tags: __serial_backend_tag, __tbb_backend_tag and __omp_backend_tag;
  • our host dispatch tags: __serial_tag, __parallel_tag and __parallel_forward_tag.
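
A hypothetical usage sketch (the policy and element types below are chosen for illustration only) shows that both kinds of tags end up at the same implementation:

// Instantiating via a backend tag: the selector picks tbb::tbb_allocator<int>,
// so this is __buffer_impl_host<parallel_policy, int, tbb::tbb_allocator<int>>
using __buf_tbb = __buffer<oneapi::dpl::__internal::__tbb_backend_tag,
                           oneapi::dpl::execution::parallel_policy, int>;

// Instantiating via a dispatch tag: __parallel_tag resolves to its underlying
// backend tag first, and then to the same allocator
using __buf_par = __buffer<oneapi::dpl::__internal::__parallel_tag</*_IsVector*/ ::std::false_type>,
                           oneapi::dpl::execution::parallel_policy, int>;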

We can also specify which allocator the buffer uses through the template parameter _TAllocator:

  • for serial and omp backends it's ::std::allocator;
  • for tbb backend it's tbb::tbb_allocator.

To support tag dispatching and select the right allocator for the host buffer, we now use the allocator_type member type of the __backend_buffer_allocator_selector structure specializations:

template <typename _T>
struct __backend_buffer_allocator_selector<_T, oneapi::dpl::__internal::__serial_backend_tag>
{
    using allocator_type = ::std::allocator<_T>;
};

template <typename _T>
struct __backend_buffer_allocator_selector<_T, oneapi::dpl::__internal::__omp_backend_tag>
{
    using allocator_type = ::std::allocator<_T>;
};

template <typename _T>
struct __backend_buffer_allocator_selector<_T, oneapi::dpl::__internal::__tbb_backend_tag>
{
    using allocator_type = tbb::tbb_allocator<_T>;
};
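
A compile-time check of these mappings could look like this (a sketch, assuming the corresponding backend headers are included):

static_assert(::std::is_same_v<
                  typename oneapi::dpl::__internal::__backend_buffer_allocator_selector<
                      int, oneapi::dpl::__internal::__tbb_backend_tag>::allocator_type,
                  tbb::tbb_allocator<int>>,
              "TBB backend tag must select tbb::tbb_allocator");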

The __backend_buffer_allocator_selector structure also has specializations that resolve dispatch tags into buffer allocators:

template <typename _T, typename _IsVector>
struct __backend_buffer_allocator_selector<_T, oneapi::dpl::__internal::__serial_tag<_IsVector>>
{
    using allocator_type = ::std::allocator<_T>;
};

template <typename _T, typename _IsVector>
struct __backend_buffer_allocator_selector<_T, oneapi::dpl::__internal::__parallel_tag<_IsVector>>
{
    using allocator_type = typename __backend_buffer_allocator_selector<
        _T, typename oneapi::dpl::__internal::__parallel_tag<_IsVector>::__backend_tag>::allocator_type;
};

template <typename _T>
struct __backend_buffer_allocator_selector<_T, oneapi::dpl::__internal::__parallel_forward_tag>
{
    using allocator_type = typename __backend_buffer_allocator_selector<
        _T, typename oneapi::dpl::__internal::__parallel_forward_tag::__backend_tag>::allocator_type;
};
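
For example, in a build where the parallel backend is TBB (so that __parallel_tag<_IsVector>::__backend_tag is __tbb_backend_tag), the dispatch tag resolves transitively to the same allocator; a hypothetical check:

static_assert(::std::is_same_v<
                  typename oneapi::dpl::__internal::__backend_buffer_allocator_selector<
                      int, oneapi::dpl::__internal::__parallel_tag</*_IsVector*/ ::std::false_type>>::allocator_type,
                  tbb::tbb_allocator<int>>,
              "__parallel_tag must forward to the allocator of its backend tag");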

As a result, we now have a single implementation of the host __buffer, whose allocator depends on the backend or dispatch tag it is instantiated with.

@SergeyKopienko force-pushed the dev/skopienko/host_buffer_impl_refacotring branch from 5b08a0e to 652f970 on March 19, 2024 11:40
Base automatically changed from dev/skopienko/tag_dispatching to main on March 20, 2024 13:28
@SergeyKopienko force-pushed the dev/skopienko/host_buffer_impl_refacotring branch 2 times, most recently from 5c65b8a to 0ebe195 on March 21, 2024 14:51
@SergeyKopienko force-pushed the dev/skopienko/host_buffer_impl_refacotring branch from fb97fa1 to fe9eedf on March 22, 2024 13:36
@SergeyKopienko force-pushed the dev/skopienko/host_buffer_impl_refacotring branch 2 times, most recently from 04bb28d to 350ee9c on April 5, 2024 15:15
@SergeyKopienko force-pushed the dev/skopienko/host_buffer_impl_refacotring branch 8 times, most recently from f3e20ef to fcad11f on April 10, 2024 07:37
Sergey Kopienko added 11 commits April 10, 2024 10:14
…add static_assert(::std::is_base_of_v<__device_backend_tag, _BackendTag>) into class __buffer_impl_hetero
…_tag) into include/oneapi/dpl/pstl/parallel_backend_serial.h and remove from forward declarations in include/oneapi/dpl/pstl/execution_defs.h
…_tag) into include/oneapi/dpl/pstl/parallel_backend_serial.h and remove from forward declarations in include/oneapi/dpl/pstl/execution_defs.h - fix compile error by introducing oneapi::dpl::__internal::__serial_buffer_allocator type
…eapi::dpl::__internal’; did you mean ‘__get_buffer_allocator’?
@SergeyKopienko force-pushed the dev/skopienko/host_buffer_impl_refacotring branch from fcad11f to 3057616 on April 10, 2024 08:15
@SergeyKopienko
Contributor Author

Required additional design work.

@SergeyKopienko deleted the dev/skopienko/host_buffer_impl_refacotring branch April 10, 2024 15:00