
ggml: fix build break for the vulkan-debug. #9265

Merged: 1 commit into ggerganov:master on Sep 6, 2024

Conversation

cyzero-kim (Contributor) commented on Sep 1, 2024

  • Windows build: OK.
  • Linux build: OK.
  • I have read the contributing guidelines
  • Self-reported review complexity:
    • Low
    • Medium
    • High

This is a minor patch that fixes a compile error in the Vulkan debug build: the VK_LOG_DEBUG call in ggml_vk_dispatch_pipeline streams a vk::DescriptorBufferInfo directly and reads a size member that the struct does not provide (its field is range).

Case: cmake .. -DGGML_VULKAN:BOOL=ON -DGGML_VULKAN_DEBUG:BOOL=ON
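
With -DGGML_VULKAN_DEBUG=ON, the VK_LOG_DEBUG macro in ggml-vulkan.cpp expands its argument into a std::cerr statement (the macro definition appears in the compiler output below), so logging expressions that are otherwise discarded must now type-check against operator<<; that is why the break only shows up in this configuration. A simplified illustration of the mechanism follows; the debug-mode definition is taken from the error output, while the release-mode fallback is an assumption for illustration only:

  #include <iostream>

  #if defined(GGML_VULKAN_DEBUG)
  // Debug builds: the argument is spliced into a real stream expression,
  // so everything passed to VK_LOG_DEBUG must have a matching operator<<.
  #define VK_LOG_DEBUG(msg) std::cerr << msg << std::endl
  #else
  // Release builds (assumed fallback): the argument is discarded, so an
  // ill-formed logging expression never gets type-checked.
  #define VK_LOG_DEBUG(msg) ((void) 0)
  #endif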

[ Windows ]
(screenshot of the Windows build error)

[ Linux ]

[  9%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-aarch64.c.o
/mnt/hdd_500gb/work/llama/llama.cpp/ggml/src/ggml-vulkan.cpp: In function ‘void ggml_vk_dispatch_pipeline(ggml_backend_vk_context*, vk_context&, vk_pipeline&, const std::initializer_list<vk::DescriptorBufferInfo>&, size_t, const void*, std::array<unsigned int, 3>)’:
/mnt/hdd_500gb/work/llama/llama.cpp/ggml/src/ggml-vulkan.cpp:2483:26: error: no match for ‘operator<<’ (operand types are ‘std::basic_ostream<char>’ and ‘const vk::DescriptorBufferInfo’)
 2483 |         std::cerr << "(" << buffer << ", " << buffer.offset << ", " << buffer.size << "), ";
      |         ~~~~~~~~~~~~~~~~ ^~ ~~~~~~
      |                   |         |
      |                   |         const vk::DescriptorBufferInfo
      |                   std::basic_ostream<char>
/mnt/hdd_500gb/work/llama/llama.cpp/ggml/src/ggml-vulkan.cpp:61:40: note: in definition of macro ‘VK_LOG_DEBUG’
   61 | #define VK_LOG_DEBUG(msg) std::cerr << msg << std::endl
      |                                        ^~~
In file included from /usr/include/c++/11/istream:39,
                 from /usr/include/c++/11/sstream:38,
                 from /usr/include/vulkan/vulkan_to_string.hpp:16,
                 from /usr/include/vulkan/vulkan.hpp:6110,
                 from /mnt/hdd_500gb/work/llama/llama.cpp/ggml/src/ggml-vulkan.cpp:7:

...

/mnt/hdd_500gb/work/llama/llama.cpp/ggml/src/ggml-vulkan.cpp:2483:79: error: ‘const struct vk::DescriptorBufferInfo’ has no member named ‘size’
 2483 |         std::cerr << "(" << buffer << ", " << buffer.offset << ", " << buffer.size << "), ";
      |                                                                               ^~~~
/mnt/hdd_500gb/work/llama/llama.cpp/ggml/src/ggml-vulkan.cpp:61:40: note: in definition of macro ‘VK_LOG_DEBUG’
   61 | #define VK_LOG_DEBUG(msg) std::cerr << msg << std::endl

[ VulkanSDK 1.3.283.0 ]
C:\VulkanSDK\1.3.283.0\Include\vulkan\vulkan_structs.hpp

  struct DescriptorBufferInfo
  {
    using NativeType = VkDescriptorBufferInfo;

#if !defined( VULKAN_HPP_NO_STRUCT_CONSTRUCTORS )
    VULKAN_HPP_CONSTEXPR DescriptorBufferInfo( VULKAN_HPP_NAMESPACE::Buffer     buffer_ = {},
                                               VULKAN_HPP_NAMESPACE::DeviceSize offset_ = {},
                                               VULKAN_HPP_NAMESPACE::DeviceSize range_  = {} ) VULKAN_HPP_NOEXCEPT
      : buffer( buffer_ )
      , offset( offset_ )
      , range( range_ )
    {
    }
...
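
The compiler output and the SDK header above pin down the two problems on that one logging line: vk::DescriptorBufferInfo has no operator<< of its own, and its third member is range, not size. A minimal sketch of the kind of correction this implies is shown below; it is an assumption about the shape of the fix rather than the verbatim patch, and the helper name log_descriptor_buffer_infos is hypothetical:

  #include <initializer_list>
  #include <iostream>
  #include <vulkan/vulkan.hpp>

  // Hypothetical helper mirroring the failing VK_LOG_DEBUG line: print only
  // members that exist on vk::DescriptorBufferInfo (buffer, offset, range)
  // and cast the buffer handle to its native type so it streams as a plain
  // handle value, matching the "(handle, offset, range)" tuples in the logs.
  static void log_descriptor_buffer_infos(const std::initializer_list<vk::DescriptorBufferInfo>& infos) {
      for (const auto& buffer : infos) {
          std::cerr << "(" << static_cast<VkBuffer>(buffer.buffer) << ", "
                    << buffer.offset << ", " << buffer.range << "), ";
      }
      std::cerr << std::endl;
  }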

[ logs ]

ggml_vk_dispatch_pipeline(mul_mat_vec_q6_k_f32_f32, {(0000027EF0332248, 1889308416, 48168960), (0000027EF11B37C8, 86144, 57344), (0000027EF11B37C8, 0, 16384), }, (4096,1,1))
ggml_vk_build_graph(0000027EE5273270, ADD)
ggml_vk_op_f32((0000027EE5273100, name=ffn_out-31, type=0, ne0=4096, ne1=1, ne2=1, ne3=1, nb0=4, nb1=16384, nb2=16384, nb3=16384), (0000027EE52726F0, name=ffn_inp-31, type=0, ne0=4096, ne1=1, ne2=1, ne3=1, nb0=4, nb1=16384, nb2=16384, nb3=16384), (0000027EE5273270, name=l_out-31, type=0, ne0=4096, ne1=1, ne2=1, ne3=1, nb0=4, nb1=16384, nb2=16384, nb3=16384), ADD, )
ggml_vk_sync_buffers()
ggml_vk_dispatch_pipeline(add_f32, {(0000027EF11B37C8, 0, 16384), (0000027EF11B37C8, 69760, 16384), (0000027EF11B37C8, 0, 16384), }, (1,8,1))
ggml_vk_build_graph(0000027EE52733E0, RMS_NORM)
ggml_vk_op_f32((0000027EE5273270, name=l_out-31, type=0, ne0=4096, ne1=1, ne2=1, ne3=1, nb0=4, nb1=16384, nb2=16384, nb3=16384), (0000027EE52733E0, name=norm, type=0, ne0=4096, ne1=1, ne2=1, ne3=1, nb0=4, nb1=16384, nb2=16384, nb3=16384), RMS_NORM, )
ggml_vk_sync_buffers()
ggml_vk_dispatch_pipeline(rms_norm_f32, {(0000027EF11B37C8, 0, 16384), (0000027EF11B37C8, 0, 16384), }, (1,1,1))

- windows build : Ok.
- linux build : Ok.

Signed-off-by: Changyeon Kim <[email protected]>
github-actions bot added labels on Sep 1, 2024: Vulkan (issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning)
ggerganov merged commit 409dc4f into ggerganov:master on Sep 6, 2024
52 checks passed
slaren (Collaborator) commented on Sep 6, 2024

The same fix was merged in ggerganov/ggml#948, so there may be some conflicts when synchronizing the repositories.

cyzero-kim deleted the vulkan_debug branch on September 6, 2024 at 22:15
dsx1986 pushed a commit to dsx1986/llama.cpp that referenced this pull request on Oct 29, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request on Nov 15, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request on Nov 18, 2024