
Packetpool/v3 #9808

Closed
wants to merge 2 commits into from

Conversation

victorjulien
Member

@victorjulien victorjulien commented Nov 16, 2023

Remaining work from #9486, rebased.

Problem:

In pcap autofp mode, there is one thread reading packets (RX). These packets
are then passed on to worker threads. When the workers are done with a
packet, they return it to the pcap reader thread's packet pool, which is
the owner of the packets. Since this requires expensive synchronization between
threads, there is logic in place to batch this operation.

When the reader thread depletes its pool, it notifies the other threads that
it is starving and that a sync needs to happen asap. Then the reader enters
a wait state. During this time no new packets are read.

However, this approach has a problem. When the reader encountered an
empty pool, it would set an atomic flag indicating that it needed a sync.
The first worker to return a packet to the pool would then see this flag,
sync, and unset it. This forced sync could move just a single packet, or
several. So if unlucky, the reader would get only a single packet before
hitting the same condition again.

Solution:

This patch updates the logic to use a new approach. Instead of using a
binary flag approach where the behavior only changes when the reader is
already starved, it uses a dynamic sync threshold that is controlled by
the reader. The reader keeps a running count of packets in its pool,
and calculates the percentage of available packets. This percentage is
then used to set the sync threshold.

When the pool is starved, the reader sets the threshold to 1 (sync on every
packet). After each successful get/sync, the threshold is adjusted.
Completes: dc40a13 ("packetpool: signal waiter within lock")
@victorjulien victorjulien mentioned this pull request Nov 16, 2023

codecov bot commented Nov 16, 2023

Codecov Report

Merging #9808 (c192217) into master (2f4027c) will increase coverage by 0.00%.
The diff coverage is 93.10%.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #9808   +/-   ##
=======================================
  Coverage   82.37%   82.37%           
=======================================
  Files         968      968           
  Lines      273866   273887   +21     
=======================================
+ Hits       225585   225609   +24     
+ Misses      48281    48278    -3     
Flag Coverage Δ
fuzzcorpus 64.21% <44.82%> (+<0.01%) ⬆️
suricata-verify 61.01% <93.10%> (+0.03%) ⬆️
unittests 62.93% <51.72%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

@victorjulien victorjulien added this to the 8.0 milestone Nov 17, 2023
@suricata-qa

Information: QA ran without warnings.

Pipeline 16604

@jasonish
Member

Will the second commit need to be backported?

Completes: dc40a13 ("packetpool: signal waiter within lock")

@victorjulien
Member Author

Will the second commit need to be backported?

Completes: dc40a13 ("packetpool: signal waiter within lock")

Probably good to do it, ya.

@victorjulien
Member Author

Merged in #9859, thanks!

@victorjulien victorjulien deleted the packetpool/v3 branch June 23, 2024 05:57