next/199/20231121/v1 #9859
Merged
Conversation
This commit modifies calls to SCCalloc that had a member count of 1 and a size calculated as element_count * sizeof(element).
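For illustration, a minimal sketch of the kind of call-site change this describes, assuming the calls were rewritten to pass the element count and element size as separate arguments; the struct name and count variable are placeholders, not taken from the patch.

```c
#include <stddef.h>
#include "util-mem.h" /* Suricata header providing the SCCalloc() wrapper */

/* Placeholder element type; not a type from the actual patch. */
typedef struct Foo_ { int x; } Foo;

static Foo *FooArrayAlloc(size_t nelems)
{
    /* Before: member count of 1, the size argument carries the whole product */
    /* Foo *arr = SCCalloc(1, nelems * sizeof(Foo)); */

    /* After (assumed form): count and element size passed separately, so the
     * calloc-style overflow check on the multiplication applies */
    Foo *arr = SCCalloc(nelems, sizeof(Foo));
    return arr;
}
```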
Problem: In pcap autofp mode, there is one thread reading packets (RX). These packets are passed on to worker threads. When a worker is done with a packet, it returns the packet to the pcap reader thread's packet pool, which owns the packets. Since this requires expensive synchronization between threads, there is logic in place to batch the operation. When the reader thread depletes its pool, it notifies the other threads that it is starving and that a sync needs to happen as soon as possible, then enters a wait state; during this time no new packets are read. The problem with this approach: when the reader encountered an empty pool, it set an atomic flag indicating it needed a sync. The first worker to return a packet to the pool would then sync and unset the flag. This forced sync could transfer just a single packet, or several. So if unlucky, the reader would get only a single packet back before hitting the same condition again.

Solution: This patch updates the logic to use a new approach. Instead of a binary flag, where the behavior only changes once the reader is already starved, it uses a dynamic sync threshold that is controlled by the reader. The reader keeps a running count of the packets in its pool and calculates the percentage of available packets. This percentage is then used to set the sync threshold. When the pool is starved, the threshold is set to 1 (sync for each packet). After each successful get/sync the threshold is adjusted.
Completes: dc40a13 ("packetpool: signal waiter within lock")
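As a rough illustration only (not the actual tmqh-packetpool.c code), here is a sketch of a reader-controlled sync threshold, assuming a known pool size and a threshold shared through an atomic; all names and the concrete percentage cut-offs are invented for the example.

```c
/* Sketch of the reader-controlled sync threshold described above.
 * PoolState, POOL_SIZE and the cut-off values are illustrative,
 * not the actual Suricata symbols or tuning. */
#include <stdatomic.h>

#define POOL_SIZE 1024

typedef struct PoolState_ {
    int avail;                 /* packets currently in the reader's pool */
    atomic_int sync_threshold; /* returned packets needed before a sync */
} PoolState;

/* Reader side: recompute the threshold from the fill level of the pool. */
static void ReaderUpdateThreshold(PoolState *ps)
{
    int pct = (ps->avail * 100) / POOL_SIZE;
    int threshold;
    if (ps->avail == 0) {
        threshold = 1;         /* starved: sync for every returned packet */
    } else if (pct < 10) {
        threshold = 8;         /* nearly starved: sync in small batches */
    } else {
        threshold = 64;        /* healthy: batch returns to limit locking */
    }
    atomic_store(&ps->sync_threshold, threshold);
}

/* Worker side: only take the expensive lock/signal path once enough
 * packets have been gathered to meet the reader's current threshold. */
static int WorkerShouldSync(PoolState *ps, int pending_returns)
{
    return pending_returns >= atomic_load(&ps->sync_threshold);
}
```

The point of the threshold being owned by the reader is that the batching adapts before the pool is empty, instead of only reacting once the reader is already starved.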
Codecov Report

@@            Coverage Diff             @@
##           master    #9859      +/-   ##
==========================================
- Coverage   82.42%   82.41%   -0.02%
==========================================
  Files         973      973
  Lines      273261   273063     -198
==========================================
- Hits       225233   225036     -197
+ Misses      48028    48027       -1
Information: QA ran without warnings. Pipeline 16677
lukashino approved these changes on Nov 22, 2023:
LGTM
Staging: