Output of snap list --all lxd core20 core22 core24 snapd microceph:
Name Version Rev Tracking Publisher Notes
core20 20240911 2434 latest/stable canonical✓ base
core22 20241001 1663 latest/stable canonical✓ base,disabled
core22 20241119 1722 latest/stable canonical✓ base
lxd 5.21.2-2f4ba6b 30131 5.21/stable canonical✓ held
microceph 18.2.4+snapc9f2b08f92 1139 reef/stable canonical✓ held
snapd 2.63 21759 latest/stable canonical✓ snapd,disabled
snapd 2.66.1 23258 latest/stable canonical✓ snapd
Issue description
My environment is an LXD cluster combined with MicroCeph.
When I launched an instance for the first time in about a month, I got an error that appears to be Ceph-related.
lxc launch ubuntu:24.10 --vm
Creating the instance
Error: Failed instance creation: Failed creating instance from image: Failed to run: rbd --id admin --cluster ceph --image-feature layering clone lxd/image_86a133f5a92a26b8c6fe9fc0f0df2cc8bc51250ffec8ed282f54c78f0f7c220b_ext4@readonly lxd/virtual-machine_k8s-test_vast-flounder: exit status 2 (2025-01-04T01:32:37.284+0000 7e4ccbe00640 -1 librbd::image::OpenRequest: failed to find snapshot readonly
2025-01-04T01:32:37.284+0000 7e4cbe000640 -1 librbd::image::CloneRequest: 0x5e2976f11b80 handle_open_parent: failed to open parent image: (2) No such file or directory
rbd: clone error: (2) No such file or directory)
As a tentative workaround, I tried the following steps:
1. rbd snap ls lxd/image_86a133f5a92a26b8c6fe9fc0f0df2cc8bc51250ffec8ed282f54c78f0f7c220b_ext4
SNAPID NAME SIZE PROTECTED TIMESTAMP
76 zombie_snapshot_4f30d075-b455-45a6-bd8f-293c5ae935d0 100 MiB yes Sun Dec 22 08:27:07 2024
2. rbd children lxd/image_86a133f5a92a26b8c6fe9fc0f0df2cc8bc51250ffec8ed282f54c78f0f7c220b_ext4@zombie_snapshot_4f30d075-b455-45a6-bd8f-293c5ae935d0
lxd/virtual-machine_k8s-test2_measured-wahoo
3. lxc delete measured-wahoo
Error: Failed deleting instance "measured-wahoo" in project "k8s-test2": Error deleting storage volume: Failed to delete volume: Failed to run: rbd --id admin --cluster ceph --pool lxd children --image image_86a133f5a92a26b8c6fe9fc0f0df2cc8bc51250ffec8ed282f54c78f0f7c220b.block --snap zombie_snapshot_cdfec6d0-8c1d-470e-863e-46587f55d897: exit status 2 (rbd: error opening image image_86a133f5a92a26b8c6fe9fc0f0df2cc8bc51250ffec8ed282f54c78f0f7c220b.block: (2) No such file or directory)
On the second attempt, the instance was deleted without any error.
4. rbd snap unprotect lxd/image_86a133f5a92a26b8c6fe9fc0f0df2cc8bc51250ffec8ed282f54c78f0f7c220b_ext4@zombie_snapshot_4f30d075-b455-45a6-bd8f-293c5ae935d0
2025-01-04T01:57:42.255+0000 780b05000640 -1 librbd::SnapshotUnprotectRequest: cannot unprotect: at least 1 child(ren) [5953f5127e353] in pool 'lxd'
2025-01-04T01:57:42.257+0000 780b05a00640 -1 librbd::SnapshotUnprotectRequest: encountered error: (16) Device or resource busy
2025-01-04T01:57:42.257+0000 780b05a00640 -1 librbd::SnapshotUnprotectRequest: 0x5eb87b710790 should_complete_error: ret_val=-16
2025-01-04T01:57:42.265+0000 780b05000640 -1 librbd::SnapshotUnprotectRequest: 0x5eb87b710790 should_complete_error: ret_val=-16
rbd: unprotecting snap failed: (16) Device or resource busy
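A possible cleanup path (untested here; the image, snapshot, and child names below are taken from the rbd output above) is to flatten the remaining child so the zombie snapshot no longer has dependants, then unprotect and remove it. The sketch below is a dry run by default: it only echoes the rbd commands; set RBD=rbd to actually execute them against the cluster.

```shell
#!/bin/sh
# Dry run by default: prints the rbd commands instead of executing them.
# Set RBD=rbd (with the admin keyring available) to run for real.
RBD="${RBD:-echo rbd}"

# Names taken from the rbd output in this report.
IMAGE="lxd/image_86a133f5a92a26b8c6fe9fc0f0df2cc8bc51250ffec8ed282f54c78f0f7c220b_ext4"
SNAP="zombie_snapshot_4f30d075-b455-45a6-bd8f-293c5ae935d0"
CHILD="lxd/virtual-machine_k8s-test2_measured-wahoo"

$RBD flatten "$CHILD"                  # copy parent data into the child clone
$RBD snap unprotect "${IMAGE}@${SNAP}" # should succeed once no children remain
$RBD snap rm "${IMAGE}@${SNAP}"        # drop the zombie snapshot
```

rbd flatten copies the parent's data into the clone, after which the clone no longer depends on the snapshot, so the unprotect should no longer fail with (16) Device or resource busy.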
Looking at https://discuss.linuxcontainers.org/t/lxd-3-21-more-ceph-issues/6868, I see the same phenomenon, but that thread suggests it was already resolved.
I have not verified a reproduction method, but my sense is that it happens after deleting an instance.
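For reference, here is the suspected sequence sketched as commands. This is unverified, the instance names are placeholders, and by default the commands are only echoed; set LXC=lxc to run them for real.

```shell
#!/bin/sh
# Unverified reproduction sketch: echoes the lxc commands by default.
# Set LXC=lxc to run against a real LXD + MicroCeph cluster.
LXC="${LXC:-echo lxc}"

$LXC launch ubuntu:24.10 repro-vm --vm   # create a VM from the Ceph-backed image
$LXC delete repro-vm --force             # suspected trigger: deleting the instance
$LXC launch ubuntu:24.10 repro2-vm --vm  # next launch may hit the zombie snapshot
```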
Information to attach
dmesg (I ruled out entries that were clearly unrelated errors.)
lxc info NAME --show-log
lxc config show NAME --expanded
lxc monitor (while reproducing the issue)