Hi, I'm trying to tune my NVMe-oF gateway for better performance by increasing the number of reactor cores (using -m [0,1,2,3,4,5,6,7,8,9], for example). My cluster can perform quite well, but I get stuck at around 4-5 GB/s on large sequential writes (with 8 cores).
As soon as I go over 8 cores, I am met with this error:
[2024-09-22 12:58:29.086343] transport.c: 486:nvmf_transport_poll_group_create: *NOTICE*: Unable to reserve the full number of buffers for the pg buffer cache. Decrease the number of cached buffers from 32 to 223
nvmf_tgt: transport.c:490: nvmf_transport_poll_group_create: Assertion `tgroup->buf_cache_size <= transport->opts.buf_cache_size' failed.
ERROR:nvmeof:Create Transport tcp returned with error:
Connection closed with partial response:
ERROR:nvmeof:GatewayServer: SIGCHLD received signum=17
Gateway subprocess terminated pid=23 exit_code=-6
I found these options in this repo, and going through some docs and code in SPDK I found a little more info on some TCP transport options, which are also mentioned in the code here:
I solved the issue by setting "num_shared_buffers" to 4096. As far as I can tell, each reactor core gets its own transport poll group, and every poll group reserves its buffer cache from the same shared buffer pool, so with more cores the default pool runs dry. I thought this might be helpful for anyone out there trying to do the same.
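For anyone looking for where that setting lives: here is a minimal sketch of the relevant [spdk] section of ceph-nvmeof.conf, assuming the option names from the sample config in this repo (the values just mirror my setup above):

[spdk]
# Pin the SPDK reactors to cores 0-9, same as the -m example above.
tgt_cmd_extra_args = -m [0,1,2,3,4,5,6,7,8,9]
transports = tcp
# Grow the shared data-buffer pool so every reactor's poll group
# can still reserve its full buffer cache.
transport_tcp_options = {"num_shared_buffers": 4096}

My understanding is that the transport_tcp_options JSON is passed through to SPDK when the transport is created, so other TCP transport parameters should work here too.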
Also, any other suggestions to increase performance would be helpful. I have gateways capable of up to 64 cores and 512 GB of RAM. My goal is to at least saturate a 100 Gbit link (about 12.5 GB/s, so roughly 3x what I'm getting now), and two of those in LACP if possible. I could also upgrade to 2x200 Gbit.
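And in case it helps anyone experimenting against a stock SPDK nvmf_tgt outside the gateway, the same knobs can be set when creating the transport over JSON-RPC; a sketch using SPDK's scripts/rpc.py:

# -t/--trtype selects the transport; -n/--num-shared-buffers sizes the
# shared data-buffer pool; -b/--buf-cache-size sets the per-poll-group cache.
scripts/rpc.py nvmf_create_transport -t TCP -n 4096 -b 32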