Packetpool/v3 #9808
Conversation
Problem: In pcap autofp mode, there is one thread reading packets (RX). These packets are then passed on to worker threads. When a worker is done with a packet, it returns the packet to the pcap reader thread's packet pool, which owns the packets. Since this requires expensive synchronization between threads, there is logic in place to batch the operation. When the reader thread depletes its pool, it notifies the other threads that it is starving and that a sync needs to happen as soon as possible, then enters a wait state. During this time no new packets are read.

However, there is a problem with this approach. When the reader encountered an empty pool, it set an atomic flag to indicate that it needed a sync. The first worker to return a packet to the pool would then see this flag, sync, and unset the flag. This forced sync could deliver anywhere from a single packet to several, so if unlucky, the reader would get just one packet before hitting the same condition again.

Solution: This patch replaces the binary flag approach, whose behavior only changes once the reader is already starved, with a dynamic sync threshold controlled by the reader. The reader keeps a running count of packets in its pool and calculates the percentage of available packets. That percentage is then used to set the sync threshold. When the pool is starved, the threshold is set to 1 (sync for each packet). After each successful get/sync the threshold is adjusted.
Completes: dc40a13 ("packetpool: signal waiter within lock")
Codecov Report
Additional details and impacted files:
@@ Coverage Diff @@
## master #9808 +/- ##
=======================================
Coverage 82.37% 82.37%
=======================================
Files 968 968
Lines 273866 273887 +21
=======================================
+ Hits 225585 225609 +24
+ Misses 48281 48278 -3
Information: QA ran without warnings. Pipeline 16604
Will the second commit need to be backported?
Probably good to do it, ya.
Merged in #9859, thanks!
Remaining work from #9486, rebased.