kata-containers/cgroups-rs (forked from levex/cgroups-rs)
Memory controller panics in some situations #138
Comments
This PR looks like it would have fixed things, but stalled out.
alexman-stripe added a commit to alexman-stripe/cgroups-rs that referenced this issue on Sep 10, 2024:
When containers are terminated, cgroup v2 memory metrics under /sys/fs/cgroup may disappear. Previously, kata-agent assumed these metrics always exist, leading to panics as reported in kata-containers#138. This commit returns a default value (0) when memory metric files are missing. This behaviour aligns with cgroup v1, which also defaults to 0 when memory metric files are missing:
- Memory.limit_in_bytes, which maps to m.max: https://github.com/kata-containers/cgroups-rs/blob/main/src/memory.rs#L635
- Memory.soft_limit_in_bytes, which maps to m.low: https://github.com/kata-containers/cgroups-rs/blob/main/src/memory.rs#L661
- MemSwap.fail_cnt: https://github.com/kata-containers/cgroups-rs/blob/main/src/memory.rs#L631
Submitting a new PR because kata-containers#116 does not handle MemSwap.fail_cnt, which will also cause a panic.
alexman-stripe added two further commits to alexman-stripe/cgroups-rs that referenced this issue, on Sep 10 and Sep 11, 2024, with the same commit message as above; the Sep 11 commit adds a sign-off: Signed-off-by: Alex Man <[email protected]>
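As a rough illustration of the behaviour described in that commit message, a read of a cgroup v2 memory metric could fall back to 0 when the file has already vanished instead of panicking. The helper and paths below are hypothetical, a minimal sketch and not the actual cgroups-rs API:

```rust
use std::fs;
use std::path::Path;

// Hypothetical helper (not the cgroups-rs API): read a cgroup v2 memory
// metric and fall back to 0 if the file has already disappeared, e.g. because
// the container was terminated and its cgroup directory was removed.
fn read_metric_or_zero(cgroup_dir: &Path, file: &str) -> u64 {
    match fs::read_to_string(cgroup_dir.join(file)) {
        // cgroup v2 writes the literal string "max" for "no limit"; treating
        // it as u64::MAX here is an assumption made for this sketch.
        Ok(s) if s.trim() == "max" => u64::MAX,
        Ok(s) => s.trim().parse().unwrap_or(0),
        // Missing or unreadable file: default to 0 instead of panicking.
        Err(_) => 0,
    }
}

fn main() {
    let dir = Path::new("/sys/fs/cgroup/example");
    println!("memory.max      = {}", read_metric_or_zero(dir, "memory.max"));
    println!("memory.low      = {}", read_metric_or_zero(dir, "memory.low"));
    println!("memory.swap.max = {}", read_metric_or_zero(dir, "memory.swap.max"));
}
```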
Describe the bug
In the lading project we're using cgroups-rs 0.3.4 to read memory and CPU controller data for processes in containers managed with Docker. We have discovered that unless --cgroupns host is set on the lading container, lading will crash with a panic at this line. The problem goes away if the host cgroup namespace is set, but it is unclear whether a panic is the expected behavior from this crate.

Expected behavior
No panic, or a library function to determine whether reads from the memory controller are valid prior to making a potentially panicking call.
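One possible shape for such a check, sketched with hypothetical names rather than anything taken from the crate's current API, is a guard that verifies the memory interface files are visible before any read is attempted:

```rust
use std::path::Path;

// Hypothetical guard (not part of the cgroups-rs API): report whether the
// cgroup v2 memory interface files are actually visible from this process.
// Without `--cgroupns host`, a container may see a cgroup tree in which
// these files do not exist, and an unconditional read would panic.
fn memory_controller_readable(cgroup_dir: &Path) -> bool {
    ["memory.max", "memory.low", "memory.current"]
        .iter()
        .all(|f| cgroup_dir.join(f).exists())
}

fn main() {
    let dir = Path::new("/sys/fs/cgroup");
    if memory_controller_readable(dir) {
        println!("memory controller files present; safe to collect stats");
    } else {
        println!("memory controller files missing; skipping stats collection");
    }
}
```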
Additional information
A backtrace from our project: