Pipeline drops half of the INT reports #104
ccascone changed the title from "Pipeline drops every other INT report" to "Pipeline drops half of the INT reports" on Oct 6, 2020.
I also see the same issue on leaf2 (the new 32D switch) of the staging server when Charles tested the topology.
Confirming that this issue still exists. I observed it happening on staging; see the attached Archive.zip.
We observed this issue in the production pod; the cause is still unclear.

It is especially evident when monitoring high-bandwidth (~10 Gbps) TCP flows generated by iperf: DeepInsight shows a rate of dropped reports that is proportional to, and in most cases equal to, the rate of successfully processed reports.
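As a quick sanity check on those numbers (illustrative values, not measurements from this issue), a dropped rate equal to the processed rate means half of all reports are lost, i.e. every other report:

```python
# Illustrative values only; not measured rates from this issue.
processed_rate = 46_000  # reports/s successfully processed
dropped_rate = 46_000    # reports/s detected as dropped (seq_no gaps)

drop_ratio = dropped_rate / (dropped_rate + processed_rate)
print(f"drop ratio: {drop_ratio:.0%}")  # 50% -> every other report dropped
```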
DeepInsight uses the `seq_no` field in the INT report fixed header to detect dropped reports. In an iperf test, the INT reports delivered to the server have missing `seq_no`s. From this pcap trace, we see gaps in the `seq_no` sequence.
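Below is a minimal sketch of how these gaps can be counted from such a pcap. It assumes the Telemetry Report v0.5 fixed-header layout (the 32-bit `seq_no` in bytes 4..8 of the UDP payload), a single report source, and a hypothetical collector UDP port of 32766; none of these are confirmed by this issue.

```python
# Sketch: count seq_no gaps in INT telemetry reports captured in a pcap.
# Assumptions (not confirmed by this issue): reports are UDP-encapsulated on
# port 32766, and the 32-bit seq_no sits in bytes 4..8 of the UDP payload
# (Telemetry Report v0.5 fixed-header layout).
import struct
from scapy.all import UDP, rdpcap

REPORT_PORT = 32766  # hypothetical collector port

prev_seq = None
received = missing = 0
for pkt in rdpcap("int_reports.pcap"):
    if UDP not in pkt or pkt[UDP].dport != REPORT_PORT:
        continue
    payload = bytes(pkt[UDP].payload)
    (seq_no,) = struct.unpack("!I", payload[4:8])
    received += 1
    if prev_seq is not None and seq_no > prev_seq + 1:
        missing += seq_no - prev_seq - 1  # reports lost between these two
    prev_seq = seq_no

total = received + missing
print(f"received={received} missing={missing} drop_ratio={missing / total:.1%}")
```

Note that in a real deployment each switch (`hw_id`) keeps its own `seq_no` counter, so gaps would need to be tracked per report source rather than globally as in this sketch.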
We don't believe it's an issue with `seq_no` computation in Tofino, since the issue cannot be reproduced when generating low-bit-rate traffic. Instead, we believe this is connected to how we use mirroring sessions and/or recirculation ports, and to the fact that the port attached to the DI server is a 10G one. The issue does not manifest when running a similar test on the staging server, where the DI port is 40G.

Update: on 03/30/2020 we observed the same issue on the staging pod, which uses 40G interfaces for the collector.