Long k8s-file logs are truncated on 'podman logs --follow' output of stopped container #21914
Comments
A friendly reminder that this issue had no activity for 30 days.
I confirm this observation. Observed on podman 5.1.1 running on Arch Linux x86_64.
@UniversalSuperBox Thanks for the script, I managed to hit it after 1000 iterations. Looking at our code I cannot see anything in particular wrong, but we use github.com/nxadm/tail for the tailing functionality, and nxadm/tail#67 is reported there, so I assume this is a bug on their side.
Actually, running more tests, it seems to be related to nxadm/tail#37, I think.
It took a lot longer than it should have, but nxadm/tail#71 should fix the issue.
Hopefully nxadm/tail is still active; the last commit was 8 months ago.
Is it time yet to fork nxadm/tail? It seems unreliable.
Issue Description
When a container using the 'k8s-file' log driver outputs a lot of logs, retrieving them with 'podman logs --follow' is unreliable. If you're lucky, the entire log file is output successfully. If you're not, the log is truncated at some unspecified place depending on the speed of your hardware. Retrieving logs without '--follow' is unaffected. Retrieving logs from a running container is unaffected.
Steps to reproduce the issue
1. Run a container that uses the 'k8s-file' log driver and outputs a large amount of logs, then let it stop.
2. Run 'podman logs --follow' on the container.

I've provided a reproducer at https://gist.github.com/UniversalSuperBox/031f34ecb51c5bd04cb559542d9ba519. You can run '1test.sh' and have it run 'podman logs' until it fails, or just run 'podman logs --follow --latest | tail -n 1', as in the sketch below:
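A minimal sketch of such a reproduction, assuming an Alpine image, a container named 'logtest', and 200000 log lines (none of these specifics come from the gist itself; adjust the log volume for your hardware):

```bash
#!/usr/bin/env bash
# Sketch of the reproduction described above. Assumptions: Alpine image,
# container name "logtest", 200000 lines. The volume needed to trigger the
# truncation depends on your hardware.
set -euo pipefail

podman rm -f logtest >/dev/null 2>&1 || true

# Start a container with the k8s-file log driver that writes many lines and exits.
podman run -d --name logtest --log-driver k8s-file alpine sh -c 'seq 1 200000' >/dev/null
podman wait logtest >/dev/null

# The container has stopped; following its logs should still yield every line.
echo "expected 200000 lines, got $(podman logs --follow logtest | wc -l)"
```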
Describe the results you received
The length of the log differs on every run of 'podman logs --follow'.
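For example, a loop like the one below (assuming the hypothetical 'logtest' container from the sketch above) prints a different line count on different iterations when the bug triggers:

```bash
# Re-read the stopped container's log repeatedly; with the bug present,
# the line counts differ from run to run instead of always matching.
for i in $(seq 1 20); do
  podman logs --follow logtest | wc -l
done
```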
Describe the results you expected
The entire log is output.
podman info output
Podman in a container
No
Privileged Or Rootless
None
Upstream Latest Release
Yes
Additional environment details
No response
Additional information
The issue occurs regardless of rootfulness.
This issue occurs whenever the log size gets above a certain point; that point seems dependent on hardware. It seems to be a race condition where the stop point is based on a certain amount of time spent in the log reader, not an amount of data output. The issue occurs over the API, too, so you can use curl with the '--limit-rate' option to test that: when the rate is smaller, the amount of text downloaded is also smaller. Command for flavor, which overwrites 'binary-output2.bin' in the current directory:
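The original command was not captured above; the following is only a sketch of an equivalent rate-limited request against the libpod logs endpoint, in which the socket path, API version, container name, and rate limit are all assumptions:

```bash
# Follow the stopped container's logs over the Podman API while throttling the
# transfer, so the time-based cutoff is easier to observe. Assumes the rootful
# socket at /run/podman/podman.sock and a container named "logtest".
curl -sS \
  --unix-socket /run/podman/podman.sock \
  --limit-rate 100k \
  --output binary-output2.bin \
  "http://d/v4.0.0/libpod/containers/logtest/logs?follow=true&stdout=true&stderr=true"
```

Lowering the '--limit-rate' value and comparing the size of 'binary-output2.bin' between runs shows the downloaded amount shrinking as the rate drops, matching the behaviour described above.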