next/199/20231121/v1 #9859

Merged
merged 4 commits into OISF:master from next/199/20231121/v1 on Nov 22, 2023
Conversation

victorjulien (Member)

jlucovsky and others added 4 commits November 21, 2023 10:22
This commit modifies calls to SCCalloc that had a member count of 1 and
a size argument calculated as element_count * sizeof(element).
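
As a rough illustration of the pattern being changed (not the actual diff, and assuming SCCalloc takes calloc-style nmemb/size arguments):

```c
#include <stdlib.h>

/* Stand-in for Suricata's allocation wrapper, assumed calloc-compatible. */
#define SCCalloc(nmemb, size) calloc((nmemb), (size))

struct Element { int id; };   /* hypothetical element type */

void AllocExample(size_t element_count)
{
    /* before: member count of 1, total size computed by the caller */
    struct Element *before = SCCalloc(1, element_count * sizeof(struct Element));

    /* after: pass count and element size separately, so the allocator
     * performs the multiplication (and can detect overflow) itself */
    struct Element *after = SCCalloc(element_count, sizeof(struct Element));

    free(before);
    free(after);
}
```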
Problem:

In pcap autofp mode, there is one thread reading packets (RX). These packets
are then passed on to worker threads. When a worker is done with a packet, it
returns the packet to the pcap reader thread's packet pool, which owns the
packets. Since this requires expensive synchronization between threads, there
is logic in place to batch this operation.

When the reader thread depletes its pool, it notifies the other threads that
it is starving and that a sync needs to happen asap. Then the reader enters
a wait state. During this time no new packets are read.

However, there is a problem with this approach. When the reader encountered
an empty pool, it would set an atomic flag indicating that it needed a sync.
The first worker to return a packet to the pool would then see this flag,
sync, and unset it. This forced sync could result in just a single packet
being synchronized, or several. So if unlucky, the reader would get just a
single packet before hitting the same condition again.

Solution:

This patch updates the logic to use a new approach. Instead of a binary flag
that only changes behavior once the reader is already starved, it uses a
dynamic sync threshold that is controlled by the reader. The reader keeps a
running count of packets in its pool and calculates the percentage of
available packets. This percentage is then used to set the sync threshold.

When the pool is starved, it sets the threshold to 1 (sync for each packet).
After each successful get/sync the threshold is adjusted.
Completes: dc40a13 ("packetpool: signal waiter within lock")
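
A minimal sketch of the threshold idea, purely for illustration: the type and function names below are hypothetical and greatly simplified compared to Suricata's actual packet pool code.

```c
#include <stdatomic.h>

#define POOL_SIZE 1024   /* assumed pool capacity for this sketch */

typedef struct PoolSketch_ {
    atomic_uint sync_threshold;   /* written by the reader, read by workers */
    unsigned int available;       /* packets currently left in the reader's pool */
    /* ... return stack, mutex, condition variable, etc. ... */
} PoolSketch;

/* Reader side: after each get/sync, derive the sync threshold from how
 * full the pool still is. A starved pool means "sync for every packet";
 * a fuller pool lets workers batch more returns before syncing. */
static void PoolUpdateThreshold(PoolSketch *p)
{
    unsigned int pct = (p->available * 100) / POOL_SIZE;
    atomic_store(&p->sync_threshold, pct > 0 ? pct : 1);
}

/* Worker side: count returns locally and only take the expensive
 * locked-and-signalled return path once the batch reaches the threshold. */
static void WorkerReturnPacket(PoolSketch *p, unsigned int *pending)
{
    (*pending)++;
    if (*pending >= atomic_load(&p->sync_threshold)) {
        /* lock the reader's pool, move the batch over, signal the reader */
        *pending = 0;
    }
}
```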

codecov bot commented Nov 21, 2023

Codecov Report

Merging #9859 (41c0526) into master (c272a64) will decrease coverage by 0.02%.
The diff coverage is 93.02%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #9859      +/-   ##
==========================================
- Coverage   82.42%   82.41%   -0.02%     
==========================================
  Files         973      973              
  Lines      273261   273063     -198     
==========================================
- Hits       225233   225036     -197     
+ Misses      48028    48027       -1     
Flag              Coverage           Δ
fuzzcorpus        64.24% <64.55%>    (-0.01%) ⬇️
suricata-verify   61.07% <77.21%>    (-0.02%) ⬇️
unittests         62.92% <64.34%>    (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

@suricata-qa

Information: QA ran without warnings.

Pipeline 16677

@lukashino (Contributor) left a comment
LGTM

@victorjulien merged commit 41c0526 into OISF:master on Nov 22, 2023
45 checks passed
@victorjulien deleted the next/199/20231121/v1 branch on June 23, 2024 05:22
Labels: None yet
4 participants