
wrap lambda to support Windows #2286

Merged (4 commits) Oct 11, 2023
Conversation

apwojcik (Collaborator) commented Oct 4, 2023

The following error prevents compilation on Windows with Clang/MSVC. It seems the standard library shipped with MSVC does not support assigning a temporary lambda to a class data member of std::function type.

C:/UIF/MIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:311:17: error: no viable overloaded '='
        execute = [=](context&, const std::vector<argument>& args) {
        ~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
C:/UIF/MIGraphX/src/include\migraphx/operation.hpp:894:32: note: in instantiation of member function 'migraphx::cpu::dnnl_op<migraphx::cpu::dnnl_concat, dnnl::concat>::finalize' requested here
        private_detail_te_self.finalize(ctx, output, input);
                               ^
C:/UIF/MIGraphX/src/include\migraphx/operation.hpp:1152:13: note: in instantiation of function template specialization 'migraphx::operation::private_detail_te_default_finalize<migraphx::cpu::dnnl_concat &>' requested here
            private_detail_te_default_finalize(
            ^
C:/UIF/MIGraphX/src/include\migraphx/operation.hpp:1095:9: note: in instantiation of member function 'migraphx::operation::private_detail_te_handle_type<migraphx::cpu::dnnl_concat>::finalize' requested here
        private_detail_te_handle_type(
        ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\xutility:254:58: note: in instantiation of function template specialization 'migraphx::operation::private_detail_te_handle_type<migraphx::cpu::dnnl_concat>::private_detail_te_handle_type<migraphx::cpu::dnnl_concat>' requested here
        ::new (static_cast<void*>(_STD addressof(_Obj))) _Ty(_STD forward<_Types>(_Args)...);
                                                         ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\memory:2104:18: note: in instantiation of function template specialization 'std::_Construct_in_place<migraphx::operation::private_detail_te_handle_type<migraphx::cpu::dnnl_concat>, migraphx::cpu::dnnl_concat>' requested here
            _STD _Construct_in_place(_Storage._Value, _STD forward<_Types>(_Args)...);
                 ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\memory:2779:26: note: (skipping 1 context in backtrace; use -ftemplate-backtrace-limit=0 to see all)
    const auto _Rx = new _Ref_count_obj2<_Ty>(_STD forward<_Types>(_Args)...);
                         ^
C:/UIF/MIGraphX/src/include\migraphx/operation.hpp:566:20: note: in instantiation of function template specialization 'std::make_shared<migraphx::operation::private_detail_te_handle_type<migraphx::cpu::dnnl_concat>, migraphx::cpu::dnnl_concat>' requested here
              std::make_shared<private_detail_te_handle_type<
                   ^
C:/UIF/MIGraphX/src/include\migraphx/register_op.hpp:64:43: note: in instantiation of function template specialization 'migraphx::operation::operation<migraphx::cpu::dnnl_concat>' requested here
    static auto op_h = detail::op_handler(T{});
                                          ^
C:/UIF/MIGraphX/src/include\migraphx/register_op.hpp:73:9: note: in instantiation of function template specialization 'migraphx::register_op<migraphx::cpu::dnnl_concat>' requested here
        register_op<T>();
        ^
C:/UIF/MIGraphX/src/include\migraphx/auto_register.hpp:36:22: note: in instantiation of function template specialization 'migraphx::register_op_action::apply<migraphx::cpu::dnnl_concat>' requested here
    Action::template apply<T>();
                     ^
C:/UIF/MIGraphX/src/include\migraphx/auto_register.hpp:56:55: note: in instantiation of function template specialization 'migraphx::auto_register_action<migraphx::register_op_action, migraphx::cpu::dnnl_concat>' requested here
const int auto_register<Action, T>::static_register = auto_register_action<Action, T>(); // NOLINT
                                                      ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\functional:1073:15: note: candidate function not viable: no known conversion from '(lambda at C:/UIF/MIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:311:19)' to 'const function<argument (context &, const vector<argument> &)>' for 1st argument
    function& operator=(const function& _Right) {
              ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\functional:1089:15: note: candidate function not viable: no known conversion from '(lambda at C:/UIF/MIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:311:19)' to 'function<argument (context &, const vector<argument> &)>' for 1st argument
    function& operator=(function&& _Right) noexcept /* strengthened */ {
              ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\functional:1098:15: note: candidate template ignored: requirement 'conjunction_v<std::negation<std::is_same<(lambda at C:/UIF/MIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:311:19), std::function<migraphx::argument (migraphx::context &, const std::vector<migraphx::argument, std::allocator<migraphx::argument>> &)>>>, std::_Is_invocable_r<migraphx::argument, (lambda at C:/UIF/MIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:311:19) &, migraphx::context &, const std::vector<migraphx::argument, std::allocator<migraphx::argument>> &>>' was not satisfied [with _Fx = (lambda at C:/UIF/MIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:311:19)]
    function& operator=(_Fx&& _Func) {
              ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\functional:1110:15: note: candidate function not viable: no known conversion from '(lambda at C:/UIF/MIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:311:19)' to 'nullptr_t' (aka 'std::nullptr_t') for 1st argument
    function& operator=(nullptr_t) noexcept {
              ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\functional:1116:15: note: candidate template ignored: could not match 'reference_wrapper<_Fx>' against '(lambda at C:/UIF/MIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:311:19)'
    function& operator=(reference_wrapper<_Fx> _Func) noexcept {
              ^
1 error generated.

@apwojcik apwojcik added the Windows Related changes for Windows Environments label Oct 4, 2023
@apwojcik apwojcik requested review from pfultz2 and kahmed10 October 4, 2023 14:08
codecov bot commented Oct 4, 2023

Codecov Report

Merging #2286 (9b03818) into develop (adfc74a) will increase coverage by 0.00%.
Report is 3 commits behind head on develop.
The diff coverage is n/a.

❗ Current head 9b03818 differs from pull request most recent head 0155fd1. Consider uploading reports for the commit 0155fd1 to get more accurate results

@@           Coverage Diff            @@
##           develop    #2286   +/-   ##
========================================
  Coverage    91.45%   91.45%           
========================================
  Files          433      433           
  Lines        16175    16177    +2     
========================================
+ Hits         14793    14795    +2     
  Misses        1382     1382           

see 6 files with indirect coverage changes

pfultz2 (Collaborator) commented Oct 4, 2023

Does it work if we construct the std::function directly?

execute = std::function<argument(context& ctx, const std::vector<argument>& args)>([=](context&, const std::vector<argument>& args) { ... });

We probably want to add a typedef for the function type to make it more readable: using execute_function = std::function<argument(context& ctx, const std::vector<argument>& args)>; then:

execute = execute_function{[=](context&, const std::vector<argument>& args) { ... }};

does not support assigning temporal lambdas to class data members.

If the cause is that the lambda is a temporary, you can assign it to a local variable first and then assign the class member:

auto local_execute = [=](context&, const std::vector<argument>& args) { ... };
execute = local_execute;

@@ -95,7 +95,7 @@ template <class Derived, class Primitive>
struct dnnl_op : auto_register_op<Derived>
{
std::vector<post_op> post_ops;
std::function<argument(context& ctx, const std::vector<argument>& args)> execute;

Don't remove the parameter names from the function.

@@ -284,7 +284,7 @@ struct dnnl_op : auto_register_op<Derived>

std::ptrdiff_t output_alias(const std::vector<shape>& shapes) const
{
return shapes.size() - 1;
return static_cast<std::ptrdiff_t>(shapes.size() - 1);

This doesn't seem to make sense. If the cast is there to avoid underflow when size() is zero, it won't prevent that. I think you would need to cast the size itself: static_cast<std::ptrdiff_t>(shapes.size()) - 1.

apwojcik (Author) replied:

MSVC does not perform the implicit type conversion here; it simply fails to compile. I will recheck it, since I made that change some time ago.

apwojcik (Author) commented Oct 4, 2023

I tried all your suggestions, i.e., casting to std::function<> and creating a local variable before the assignment. The result is the same error message: no viable overload ...

pfultz2 (Collaborator) commented Oct 4, 2023

Looking at the error more closely, I see this:

C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\functional:1098:15: note: candidate template ignored: requirement 'conjunction_v<std::negation<std::is_same<(lambda at C:/UIF/MIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:311:19), std::function<migraphx::argument (migraphx::context &, const std::vector<migraphx::argument, std::allocator<migraphx::argument>> &)>>>, std::_Is_invocable_r<migraphx::argument, (lambda at C:/UIF/MIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:311:19) &, migraphx::context &, const std::vector<migraphx::argument, std::allocator<migraphx::argument>> &>>' was not satisfied [with _Fx = (lambda at C:/UIF/MIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:311:19)]
    function& operator=(_Fx&& _Func) {
              ^

This looks like it fails because of std::_Is_invocable_r, but it's passing the lambda as a non-const reference. Perhaps adding mutable to the lambda will fix it:

execute = [=](context&, const std::vector<argument>& args) mutable { ... }

I don't have access to Windows, but if that doesn't fix it, we need to investigate why std::_Is_invocable_r is false. I would try invoking the function directly to see what kind of error comes up.

@apwojcik apwojcik changed the title replace lambda assignment with std::bind() replace lambda with functor to support Windows Oct 7, 2023
apwojcik (Author) commented Oct 7, 2023

I replaced the lambda with a functor, and it worked.

@apwojcik apwojcik requested a review from pfultz2 October 7, 2023 12:09
migraphx-bot (Collaborator) commented Oct 7, 2023

Test | Batch | Rate new (d63559) | Rate old (a3cf99) | Diff
torchvision-resnet50 64 2,326.27 2,322.40 0.17%
torchvision-resnet50_fp16 64 5,356.01 5,360.94 -0.09%
torchvision-densenet121 32 1,848.11 1,846.99 0.06%
torchvision-densenet121_fp16 32 3,401.90 3,410.65 -0.26%
torchvision-inceptionv3 32 1,294.91 1,295.24 -0.03%
torchvision-inceptionv3_fp16 32 2,535.18 2,536.02 -0.03%
cadene-inceptionv4 16 620.22 619.80 0.07%
cadene-resnext64x4 16 589.00 588.66 0.06%
slim-mobilenet 64 7,211.12 7,207.48 0.05%
slim-nasnetalarge 64 236.28 236.47 -0.08%
slim-resnet50v2 64 2,555.90 2,557.75 -0.07%
bert-mrpc-onnx 8 824.65 824.68 -0.00%
bert-mrpc-tf 1 389.34 388.60 0.19%
pytorch-examples-wlang-gru 1 298.55 300.03 -0.49%
pytorch-examples-wlang-lstm 1 318.48 317.68 0.25%
torchvision-resnet50_1 1 545.68 551.05 -0.97%
torchvision-inceptionv3_1 1 301.05 303.30 -0.74%
cadene-dpn92_1 1 356.48 356.34 0.04%
cadene-resnext101_1 1 220.09 218.29 0.83%
slim-vgg16_1 1 223.80 223.99 -0.08%
slim-mobilenet_1 1 1,519.96 1,506.95 0.86%
slim-inceptionv4_1 1 221.51 214.27 3.38% 🔆
onnx-taau-downsample 1 305.80 306.46 -0.21%
dlrm-criteoterabyte 1 21.71 21.68 0.15%
dlrm-criteoterabyte_fp16 1 40.68 40.75 -0.18%
agentmodel 1 5,827.47 5,812.37 0.26%
unet_fp16 2 55.18 55.12 0.11%
resnet50v1_fp16 1 764.33 757.48 0.90%
bert_base_cased_fp16 64 970.90 970.36 0.05%
bert_large_uncased_fp16 32 305.05 304.99 0.02%
bert_large_fp16 1 167.21 167.50 -0.17%
distilgpt2_fp16 16 1,350.94 1,351.60 -0.05%

Check results before merge 🔆

migraphx-bot (Collaborator) commented:

✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance
✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance
✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance
✅ torchvision-inceptionv3_1: PASSED: MIGraphX meets tolerance
✅ cadene-dpn92_1: PASSED: MIGraphX meets tolerance
✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance
✅ slim-vgg16_1: PASSED: MIGraphX meets tolerance
✅ slim-mobilenet_1: PASSED: MIGraphX meets tolerance
✅ slim-inceptionv4_1: PASSED: MIGraphX meets tolerance
✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance
✅ agentmodel: PASSED: MIGraphX meets tolerance
✅ unet: PASSED: MIGraphX meets tolerance
✅ resnet50v1: PASSED: MIGraphX meets tolerance
🔴 bert_base_cased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output
🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output
✅ bert_large: PASSED: MIGraphX meets tolerance
🔴 distilgpt2_fp16: FAILED: MIGraphX is not within tolerance - check verbose output

pfultz2 (Collaborator) commented Oct 7, 2023

Can we wrap the lambda in a function object?

template<class F>
struct execute_wrapper
{
    F f;
    argument operator()(context& ctx, const std::vector<argument>& args) const
    {
        return f(ctx, args);
    }
};
template<class F>
execute_wrapper<F> make_execute_wrapper(F f)
{
    return {std::move(f)};
}

Then do:

execute = make_execute_wrapper([=](context&, const std::vector<argument>& args) { ... });

apwojcik (Author) commented Oct 7, 2023

I tried wrapping the lambda in a function object before; that also failed for me. Here's the error message from the piece of code you asked me to try.

[2/141] Building CXX object src/targets/cpu/CMakeFiles/migraphx_cpu.dir/eltwise.cpp.obj
FAILED: src/targets/cpu/CMakeFiles/migraphx_cpu.dir/eltwise.cpp.obj 
C:\opt\rocm\bin\clang++.exe -DMIGRAPHX_BUILD_TESTING -DMSGPACK_DEFAULT_API_VERSION=3 -DMSGPACK_NO_BOOST -D_CRT_SECURE_NO_WARNINGS -D_USE_MATH_DEFINES -Dmigraphx_cpu_EXPORTS -IC:/UIF/AMDMIGraphX/build.release/src/targets/cpu/include -IC:/UIF/AMDMIGraphX/src/targets/cpu/include -IC:/UIF/AMDMIGraphX/src/include -IC:/UIF/AMDMIGraphX/build.release/src/include -isystem C:/UIF/AMDMIGraphX/build.release/_deps/dnnl-build/include -isystem C:/UIF/AMDMIGraphX/build.release/_deps/dnnl-src/src/../include -isystem C:/UIF/AMDMIGraphX/build.release/_deps/half-src/include -isystem C:/UIF/AMDMIGraphX/build.release/_deps/msgpack-src/include -isystem C:/UIF/AMDMIGraphX/build.release/_deps/msgpack-build/include -O3 -DNDEBUG -std=c++17 -D_DLL -D_MT -Xclang --dependent-lib=msvcrt -Wall -Wextra -Wcomment -Wendif-labels -Wformat -Winit-self -Wreturn-type -Wsequence-point -Wswitch -Wtrigraphs -Wundef -Wuninitialized -Wunreachable-code -Wunused -Wno-sign-compare -Wno-reserved-macro-identifier -Weverything -Wshadow -Wno-c++98-compat -Wno-c++98-compat-pedantic -Wno-conversion -Wno-double-promotion -Wno-exit-time-destructors -Wno-extra-semi -Wno-extra-semi-stmt -Wno-float-conversion -Wno-gnu-anonymous-struct -Wno-gnu-zero-variadic-macro-arguments -Wno-missing-prototypes -Wno-nested-anon-types -Wno-option-ignored -Wno-padded -Wno-shorten-64-to-32 -Wno-sign-conversion -Wno-unused-command-line-argument -Wno-weak-vtables -Wno-c99-extensions -fno-sanitize=function,vptr -fms-extensions -fms-compatibility -fdelayed-template-parsing -fopenmp=libomp -MD -MT src/targets/cpu/CMakeFiles/migraphx_cpu.dir/eltwise.cpp.obj -MF src\targets\cpu\CMakeFiles\migraphx_cpu.dir\eltwise.cpp.obj.d -o src/targets/cpu/CMakeFiles/migraphx_cpu.dir/eltwise.cpp.obj -c C:/UIF/AMDMIGraphX/src/targets/cpu/eltwise.cpp
In file included from C:/UIF/AMDMIGraphX/src/targets/cpu/eltwise.cpp:25:
In file included from C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/pointwise.hpp:31:
In file included from C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/context.hpp:28:
C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:106:20: error: no matching function for call to object of type 'const (lambda at C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:326:40)'
            return f(ctx, args);
                   ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\type_traits:1762:16: note: in instantiation of member function 'migraphx::cpu::dnnl_op<migraphx::cpu::dnnl_eltwise, dnnl::eltwise_forward>::execute_wrapper<(lambda at C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:326:40)>::operator()' requested here
        return static_cast<_Callable&&>(_Obj)(static_cast<_Ty1&&>(_Arg1), static_cast<_Types2&&>(_Args2)...);
               ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\functional:816:14: note: in instantiation of member function 'std::_Func_impl_no_alloc<migraphx::cpu::dnnl_op<migraphx::cpu::dnnl_eltwise, dnnl::eltwise_forward>::execute_wrapper<(lambda at C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:326:40)>, migraphx::argument, migraphx::context &, const std::vector<migraphx::argument> &>::_Do_call' requested here
    explicit _Func_impl_no_alloc(_Other&& _Val) : _Callee(_STD forward<_Other>(_Val)) {}
             ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\xmemory:283:28: note: in instantiation of function template specialization 'std::_Func_impl_no_alloc<migraphx::cpu::dnnl_op<migraphx::cpu::dnnl_eltwise, dnnl::eltwise_forward>::execute_wrapper<(lambda at C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:326:40)>, migraphx::argument, migraphx::context &, const std::vector<migraphx::argument> &>::_Func_impl_no_alloc<migraphx::cpu::dnnl_op<migraphx::cpu::dnnl_eltwise, dnnl::eltwise_forward>::execute_wrapper<(lambda at C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:326:40)>, 0>' requested here
    ::new (_Guard._Result) _Ty(_STD forward<_Types>(_Args)...);
                           ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\functional:927:18: note: in instantiation of function template specialization 'std::_Global_new<std::_Func_impl_no_alloc<migraphx::cpu::dnnl_op<migraphx::cpu::dnnl_eltwise, dnnl::eltwise_forward>::execute_wrapper<(lambda at C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:326:40)>, migraphx::argument, migraphx::context &, const std::vector<migraphx::argument> &>, migraphx::cpu::dnnl_op<migraphx::cpu::dnnl_eltwise, dnnl::eltwise_forward>::execute_wrapper<(lambda at C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:326:40)>>' requested here
            _Set(_Global_new<_Impl>(_STD forward<_Fx>(_Val)));
                 ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\functional:1052:15: note: in instantiation of function template specialization 'std::_Func_class<migraphx::argument, migraphx::context &, const std::vector<migraphx::argument> &>::_Reset<migraphx::cpu::dnnl_op<migraphx::cpu::dnnl_eltwise, dnnl::eltwise_forward>::execute_wrapper<(lambda at C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:326:40)>>' requested here
        this->_Reset(_STD forward<_Fx>(_Func));
              ^
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include\functional:1099:9: note: (skipping 8 contexts in backtrace; use -ftemplate-backtrace-limit=0 to see all)
        function(_STD forward<_Fx>(_Func)).swap(*this);
        ^
C:/UIF/AMDMIGraphX/src/include\migraphx/operation.hpp:566:20: note: in instantiation of function template specialization 'std::make_shared<migraphx::operation::private_detail_te_handle_type<migraphx::cpu::dnnl_eltwise>, migraphx::cpu::dnnl_eltwise>' requested here
              std::make_shared<private_detail_te_handle_type<
                   ^
C:/UIF/AMDMIGraphX/src/include\migraphx/register_op.hpp:64:43: note: in instantiation of function template specialization 'migraphx::operation::operation<migraphx::cpu::dnnl_eltwise>' requested here
    static auto op_h = detail::op_handler(T{});
                                          ^
C:/UIF/AMDMIGraphX/src/include\migraphx/register_op.hpp:73:9: note: in instantiation of function template specialization 'migraphx::register_op<migraphx::cpu::dnnl_eltwise>' requested here
        register_op<T>();
        ^
C:/UIF/AMDMIGraphX/src/include\migraphx/auto_register.hpp:36:22: note: in instantiation of function template specialization 'migraphx::register_op_action::apply<migraphx::cpu::dnnl_eltwise>' requested here
    Action::template apply<T>();
                     ^
C:/UIF/AMDMIGraphX/src/include\migraphx/auto_register.hpp:56:55: note: in instantiation of function template specialization 'migraphx::auto_register_action<migraphx::register_op_action, migraphx::cpu::dnnl_eltwise>' requested here
const int auto_register<Action, T>::static_register = auto_register_action<Action, T>(); // NOLINT
                                                      ^
C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:326:40: note: candidate function not viable: no known conversion from 'context' to 'context &' for 1st argument
        execute = make_execute_wrapper([=](context&, const std::vector<argument>& args) {
                                       ^
1 error generated.

pfultz2 (Collaborator) commented Oct 8, 2023

So we are definitely getting close to what the problem is:

C:/UIF/AMDMIGraphX/src/targets/cpu/include\migraphx/cpu/dnnl.hpp:326:40: note: candidate function not viable: no known conversion from 'context' to 'context &' for 1st argument
        execute = make_execute_wrapper([=](context&, const std::vector<argument>& args) {
                                       ^

I am not sure why it's taking context and not context&. It could be a confusion between migraphx::context and migraphx::cpu::context, but I am not sure. Perhaps a fully qualified name would be better.

Either way, we don't use this parameter, so we can remove it from the lambda (but not from the function object):

template<class F>
struct execute_wrapper
{
    F f;
    argument operator()(context&, const std::vector<argument>& args) const
    {
        return f(args);
    }
};
template<class F>
execute_wrapper<F> make_execute_wrapper(F f)
{
    return {std::move(f)};
}

And then create the lambda as:

execute = make_execute_wrapper([=](const std::vector<argument>& args) { ... });

apwojcik (Author) commented Oct 8, 2023

It worked. We have the final solution now. Thanks.

@apwojcik apwojcik changed the title replace lambda with functor to support Windows wrap lambda to support Windows Oct 8, 2023
@@ -91,6 +91,19 @@ struct post_op : reflect_equality<post_op>, reflect_stream<post_op>
}
};

template <class F>

Can you add a comment explaining that the lambda is wrapped because of an issue with MSVC and the context parameter?

@causten causten merged commit 28d577e into develop Oct 11, 2023
@causten causten deleted the windows_bind_dnnl branch October 11, 2023 03:17