
docker ZFS driver creates hundreds of datasets and doesn’t clean them #41055

Open
didrocks opened this issue Jun 2, 2020 · 45 comments

@didrocks

didrocks commented Jun 2, 2020

Description

We started to receive a lot of bug reports against ZSys (like ubuntu/zsys#102 and ubuntu/zsys#112) because the number of datasets created by docker.io goes quickly out of control as people don’t remove stopped containers (via docker rm).

Steps to reproduce the issue:

  1. Have a system with ZFS on root installed
  2. Run any container and let it stop

Describe the results you received:

zfs list -> the datasets associated with this stopped container are still there. After a few days, the list grows out of control:

rpool/ROOT/ubuntu_093s22/var/lib/8551e44f95bf75de129e5e9844e6eba4fe5a8ccd1033ea81f2b64ebee600c303 /var/lib/docker/zfs/graph/8551e44f95bf75de129e5e9844e6eba4fe5a8ccd1033ea81f2b64ebee600c303 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/e2d41b06d9fd4bdf8a8fb633c78db976f28f3ca293b5e1637be463e563513145 /var/lib/docker/zfs/graph/e2d41b06d9fd4bdf8a8fb633c78db976f28f3ca293b5e1637be463e563513145 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/9ef38a00e3f67b918a08b2af82bb19ef0d944e9044cabc35a00999c39cce0c15 /var/lib/docker/zfs/graph/9ef38a00e3f67b918a08b2af82bb19ef0d944e9044cabc35a00999c39cce0c15 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/a255759e2328c54099cb22ac13cde662559bba6fa5b710045480ba423228a22d /var/lib/docker/zfs/graph/a255759e2328c54099cb22ac13cde662559bba6fa5b710045480ba423228a22d zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/3f7fb34b9e064d3f2c870c8aec32a906fa2dd903f380113417f9110ac17feebb /var/lib/docker/zfs/graph/3f7fb34b9e064d3f2c870c8aec32a906fa2dd903f380113417f9110ac17feebb zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/ad1465a4a6b68e701a636561837c5b1c374f9bd3eed32fa6785d4bd8610c2707 /var/lib/docker/zfs/graph/ad1465a4a6b68e701a636561837c5b1c374f9bd3eed32fa6785d4bd8610c2707 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/f738a3c41567893c1693a9c907dd963ff28cf6d2f3c42a7a8064876c7fd25039 /var/lib/docker/zfs/graph/f738a3c41567893c1693a9c907dd963ff28cf6d2f3c42a7a8064876c7fd25039 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/91b0c9a1c8fcc0bebcc7bc848f4374482c73b7ff0e84a91519e3bea0971ee995 /var/lib/docker/zfs/graph/91b0c9a1c8fcc0bebcc7bc848f4374482c73b7ff0e84a91519e3bea0971ee995 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/2766fdb99c85b5e63efe1e54c2449a7f0802da2c6dda81d91833dcc620b03368 /var/lib/docker/zfs/graph/2766fdb99c85b5e63efe1e54c2449a7f0802da2c6dda81d91833dcc620b03368 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/0352f94bb21aa0d09af2bc32ed130d1ce8e41281932d1bdc636492998a2591a4 /var/lib/docker/zfs/graph/0352f94bb21aa0d09af2bc32ed130d1ce8e41281932d1bdc636492998a2591a4 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/617cfbdbba0c507c5040ed0f155180cab46805cbe0b03b1b5008bc3a892795ca /var/lib/docker/zfs/graph/617cfbdbba0c507c5040ed0f155180cab46805cbe0b03b1b5008bc3a892795ca zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/f894bb4b0ef78ac2f7e38a48bbbc9a3ade2ff3138bc20c6d82083b0734b9407e /var/lib/docker/zfs/graph/f894bb4b0ef78ac2f7e38a48bbbc9a3ade2ff3138bc20c6d82083b0734b9407e zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/339ffaab8fadb44ce2f3c7d746bf1b1b86368ef1bae89db0d03e7b99e1794b47 /var/lib/docker/zfs/graph/339ffaab8fadb44ce2f3c7d746bf1b1b86368ef1bae89db0d03e7b99e1794b47 zfs rw,relatime,xattr,posixacl 0 0
 nsfs /run/docker/netns/lb_9f49r975o nsfs rw 0 0
rpool/ROOT/ubuntu_093s22/var/lib/c2a6cf5b2060bc8e0c7a19cbd0b12d19a1c5f0484440742675945614ec40ee6e /var/lib/docker/zfs/graph/c2a6cf5b2060bc8e0c7a19cbd0b12d19a1c5f0484440742675945614ec40ee6e zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/6b7083d88efac59883c9cbc6ddd3faa98bcceec95e193a50e43889aa78f5b7e8 /var/lib/docker/zfs/graph/6b7083d88efac59883c9cbc6ddd3faa98bcceec95e193a50e43889aa78f5b7e8 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/7b4419e4a9c84dc597ed00f37dfd58fa3081c6ea6075c1425bb47cb332c3dd9d /var/lib/docker/zfs/graph/7b4419e4a9c84dc597ed00f37dfd58fa3081c6ea6075c1425bb47cb332c3dd9d zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/acac1a2f9dd1bb2858ea9ff7b3bd6e9c232fc6a140ff9787fb1c540cb138d71b /var/lib/docker/zfs/graph/acac1a2f9dd1bb2858ea9ff7b3bd6e9c232fc6a140ff9787fb1c540cb138d71b zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/e64c60a49787165955db0f4b75772f7e9d649d48a66abcb65bdee11e762d0b94 /var/lib/docker/zfs/graph/e64c60a49787165955db0f4b75772f7e9d649d48a66abcb65bdee11e762d0b94 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/f1a350051fc8c0a28e57c06e4211cda357ded0ca1aabdf5810c08f88c0930671 /var/lib/docker/zfs/graph/f1a350051fc8c0a28e57c06e4211cda357ded0ca1aabdf5810c08f88c0930671 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/3ea389254b4c8719b45b3082c13dc593e65f4c83e3ed3a09ae8ee680628bc99a /var/lib/docker/zfs/graph/3ea389254b4c8719b45b3082c13dc593e65f4c83e3ed3a09ae8ee680628bc99a zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/d110548bbfdd5a207d552dce27a7a0ad4968dc7082392c6d1eeb2820442f631f /var/lib/docker/zfs/graph/d110548bbfdd5a207d552dce27a7a0ad4968dc7082392c6d1eeb2820442f631f zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/276bb14f34cdbcc1de232205c768e463ac28f059ef116848d28437a8b2dc78e0 /var/lib/docker/zfs/graph/276bb14f34cdbcc1de232205c768e463ac28f059ef116848d28437a8b2dc78e0 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/e7c1695c19ec32a919de338b75d62fc81d403d4335f031ae67263b0993b3c7df /var/lib/docker/zfs/graph/e7c1695c19ec32a919de338b75d62fc81d403d4335f031ae67263b0993b3c7df zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/f188bf6ed3eae435f2d47ffeab45bf2bb750585ab328cd2e0b889f4dc2b4b5de /var/lib/docker/zfs/graph/f188bf6ed3eae435f2d47ffeab45bf2bb750585ab328cd2e0b889f4dc2b4b5de zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/92394e1325e36f73937d46fdbabb617fd75665ee72d4ec91af6db228a7284f7d /var/lib/docker/zfs/graph/92394e1325e36f73937d46fdbabb617fd75665ee72d4ec91af6db228a7284f7d zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/df909c219c81a76b913339c9317b0373389be5a7d21476c95de2c897a4b65266 /var/lib/docker/zfs/graph/df909c219c81a76b913339c9317b0373389be5a7d21476c95de2c897a4b65266 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/05aa29f58d88631dfae7bea207be57b966d5a0b0818789223891f9207de56ccc /var/lib/docker/zfs/graph/05aa29f58d88631dfae7bea207be57b966d5a0b0818789223891f9207de56ccc zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/97e7d6b85020fd4d5809812f904b98400dd4e6949aafc4538bbc1731bb43c32e /var/lib/docker/zfs/graph/97e7d6b85020fd4d5809812f904b98400dd4e6949aafc4538bbc1731bb43c32e zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/807b88fa7898e1c834e72e9d2aae2f5e3b60fe1689073c463a5456b5c3681d47 /var/lib/docker/zfs/graph/807b88fa7898e1c834e72e9d2aae2f5e3b60fe1689073c463a5456b5c3681d47 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/f1bdb8e60b558899c7f30b48f9e14a3f9e05c89e2f13ddbc92a9bf4af89f747a /var/lib/docker/zfs/graph/f1bdb8e60b558899c7f30b48f9e14a3f9e05c89e2f13ddbc92a9bf4af89f747a zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/2b845fff6676984d3e9be4922490d4c106ea01a4f43dc38a7923edeeafb1e4f8 /var/lib/docker/zfs/graph/2b845fff6676984d3e9be4922490d4c106ea01a4f43dc38a7923edeeafb1e4f8 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/f3765cedd1d0d72ae55bf27f5abc3c3cd9a9d50525c496cb2570246fc2af5ab5 /var/lib/docker/zfs/graph/f3765cedd1d0d72ae55bf27f5abc3c3cd9a9d50525c496cb2570246fc2af5ab5 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/ab7ba76477d9ae0cd4f527a84fe31c7a1e0e16cce29c933a0f87d84dd00f3900 /var/lib/docker/zfs/graph/ab7ba76477d9ae0cd4f527a84fe31c7a1e0e16cce29c933a0f87d84dd00f3900 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/a9d4e261c13860ecd3a174c59ea1d54fc7f3c1da716904b5656b3232e27bb8f1 /var/lib/docker/zfs/graph/a9d4e261c13860ecd3a174c59ea1d54fc7f3c1da716904b5656b3232e27bb8f1 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/a581a983d5193654a49939bfeee115f5aaaea95eb60d52dc83a67f1ed15d050d /var/lib/docker/zfs/graph/a581a983d5193654a49939bfeee115f5aaaea95eb60d52dc83a67f1ed15d050d zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/214a0c09660c60507acb6646cae857a77a93d61715e53856a77207d652d40b09 /var/lib/docker/zfs/graph/214a0c09660c60507acb6646cae857a77a93d61715e53856a77207d652d40b09 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/4fb7e414f60316cc9361854863c4ff068b03a713c8aa40ac09028f70c642d57c /var/lib/docker/zfs/graph/4fb7e414f60316cc9361854863c4ff068b03a713c8aa40ac09028f70c642d57c zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/5b13bf585cae54512e63827237b0d6590b84c6b5813f45d7c11dc0a6d44cf0b3 /var/lib/docker/zfs/graph/5b13bf585cae54512e63827237b0d6590b84c6b5813f45d7c11dc0a6d44cf0b3 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/e4f2324ac1aa3c4410db8d47963e7c4677dc7cdd97174c8e7243a9a42c500a2d /var/lib/docker/zfs/graph/e4f2324ac1aa3c4410db8d47963e7c4677dc7cdd97174c8e7243a9a42c500a2d zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/80b81067040ab97c6b643d5028427ba87e0246c9b1c70129bcd7c1a93790758e /var/lib/docker/zfs/graph/80b81067040ab97c6b643d5028427ba87e0246c9b1c70129bcd7c1a93790758e zfs rw,relatime,xattr,posixacl 0 0
/var/lib/docker/containers/ef0e87d9f5a3536c4568dda9bcfd29bef4eba1a6b8ecc8041fc00fe83ba063d5/mounts/secrets tmpfs ro,relatime 0 0
rpool/ROOT/ubuntu_093s22/var/lib/ad7b460cb00cbd493646b220fa2c321a511020d2c92c392d7e4238715f2e67b3 /var/lib/docker/zfs/graph/ad7b460cb00cbd493646b220fa2c321a511020d2c92c392d7e4238715f2e67b3 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/2b637c98e9f70e83b79c5bc81786bc7e1cf570ab2082317195698f456d8e01a1 /var/lib/docker/zfs/graph/2b637c98e9f70e83b79c5bc81786bc7e1cf570ab2082317195698f456d8e01a1 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/d2c058dd42662be1fda5a4a99fe9e504816022dd44d0be10cab41b30c1a2f352 /var/lib/docker/zfs/graph/d2c058dd42662be1fda5a4a99fe9e504816022dd44d0be10cab41b30c1a2f352 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/165f26e5f75b990192d502fc204553a135494014b2db4372620b842eff01ef7d /var/lib/docker/zfs/graph/165f26e5f75b990192d502fc204553a135494014b2db4372620b842eff01ef7d zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/3377b690893ae14be14e8acc9c7ee10fa028566ae75f1bd9b4eabe9413afa469 /var/lib/docker/zfs/graph/3377b690893ae14be14e8acc9c7ee10fa028566ae75f1bd9b4eabe9413afa469 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/ff111e82b564f4d6cc98825e8975a950c268725753b0c8a85b18cb7c3f226742 /var/lib/docker/zfs/graph/ff111e82b564f4d6cc98825e8975a950c268725753b0c8a85b18cb7c3f226742 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/cacd3759ea772bff4c94ad9279238133228d677971a35b336b07873a5a18c6ea /var/lib/docker/zfs/graph/cacd3759ea772bff4c94ad9279238133228d677971a35b336b07873a5a18c6ea zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/1074fb95ec7125a1597b98e0af7b069872477cf4fb0e0a43592b2b32a29bc903 /var/lib/docker/zfs/graph/1074fb95ec7125a1597b98e0af7b069872477cf4fb0e0a43592b2b32a29bc903 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/9892da09516140e03be49aae8f99e0ea796c5c82f1ceb4b745cdfdd83d671be6 /var/lib/docker/zfs/graph/9892da09516140e03be49aae8f99e0ea796c5c82f1ceb4b745cdfdd83d671be6 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/8de6f5ead3428a74afde5788435667c015e73277486585f1a8af049b32a3024c /var/lib/docker/zfs/graph/8de6f5ead3428a74afde5788435667c015e73277486585f1a8af049b32a3024c zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/953d4e07479a6158d22a144cd396ebdc58690b6fd299466d892593658db63a40 /var/lib/docker/zfs/graph/953d4e07479a6158d22a144cd396ebdc58690b6fd299466d892593658db63a40 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/9c28d6619084db68784c4d3c7e00eebba3b4cdcc150db05ee7e2ac044a658cc4 /var/lib/docker/zfs/graph/9c28d6619084db68784c4d3c7e00eebba3b4cdcc150db05ee7e2ac044a658cc4 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/21b958291fbdb3ed5193d284c63887ff12528b18c6d7adbeda795fe4da53367f /var/lib/docker/zfs/graph/21b958291fbdb3ed5193d284c63887ff12528b18c6d7adbeda795fe4da53367f zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/90fe99beb22f3e542cf07b7a537d2dd5c0de69e8bdf4a021f8d5909fd3a7f6bf /var/lib/docker/zfs/graph/90fe99beb22f3e542cf07b7a537d2dd5c0de69e8bdf4a021f8d5909fd3a7f6bf zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/dd8db6f24eea225a890a970d969e82669f20ec656595d239a5e3ee833ebd2544 /var/lib/docker/zfs/graph/dd8db6f24eea225a890a970d969e82669f20ec656595d239a5e3ee833ebd2544 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/be2248d729c3550116544ddabfbd0d86626422d7cedb307e2bcb6bddd345cf1b /var/lib/docker/zfs/graph/be2248d729c3550116544ddabfbd0d86626422d7cedb307e2bcb6bddd345cf1b zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/1e55c38f1396c24baffdf3ea6f38338b499758443acbaf161f9b372979730e29 /var/lib/docker/zfs/graph/1e55c38f1396c24baffdf3ea6f38338b499758443acbaf161f9b372979730e29 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/dafeb16f3dff99d189ad527b4d75d9dbd2220e25a6ed1ff9360a83cdad8a5b2d /var/lib/docker/zfs/graph/dafeb16f3dff99d189ad527b4d75d9dbd2220e25a6ed1ff9360a83cdad8a5b2d zfs rw,relatime,xattr,posixacl 0 0
 shm /var/lib/docker/containers/f48e3f6ad9a6dc0760de39a06b998d6160c9bf6834dbdffa23f5cb076c618742/mounts/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=65536k 0 0
 tmpfs /var/lib/docker/containers/b7ccabe87cf0b83f73101ab6ea28ae3c8c554aaee501c678c8bc96131641e701/mounts/secrets tmpfs ro,relatime 0 0
rpool/ROOT/ubuntu_093s22/var/lib/948cd69a660206b92481a39339f27f312363d61a7aa5142e4be3b3c6762f85f6 /var/lib/docker/zfs/graph/948cd69a660206b92481a39339f27f312363d61a7aa5142e4be3b3c6762f85f6 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/ce61f7db130c0ffea66b04c42ddb622dd155e4030007717db233ed3e1304f9d9 /var/lib/docker/zfs/graph/ce61f7db130c0ffea66b04c42ddb622dd155e4030007717db233ed3e1304f9d9 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/4992fc79bddb589290b989958f9fc3cabb47b15fad11d73ff372c76481980380 /var/lib/docker/zfs/graph/4992fc79bddb589290b989958f9fc3cabb47b15fad11d73ff372c76481980380 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_093s22/var/lib/55c4ea30702891d9a19a40cd95021279ad09c2b442e36e726a2abc459db689dd /var/lib/docker/zfs/graph/55c4ea30702891d9a19a40cd95021279ad09c2b442e36e726a2abc459db689dd zfs rw,relatime,xattr,posixacl 0 0
/var/lib/docker/containers/552d81abc3bb96f5d4c2f38a2a5dbdc589ee61ce73ce031b6629a537c11edac7/mounts/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=65536k 0 0
 nsfs /run/docker/netns/2969a86d397b nsfs rw 0 0
/var/lib/docker/containers/5c53cdc3f8232dc13ce2778dcd1259f16575734bb3f2ac50d67e4e0e12911671/mounts/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=65536k 0 0
rpool/ROOT/ubuntu_093s22/var/lib/4243412321eaf68d83ce9ff2f3e6f8fd6373bd962e8799e978bc2aa613d58bb7 /var/lib/docker/zfs/graph/4243412321eaf68d83ce9ff2f3e6f8fd6373bd962e8799e978bc2aa613d58bb7 zfs rw,relatime,xattr,posixacl 0 0

This causes timeouts and makes ZFS-related commands very slow on the system.

Describe the results you expected:

I think docker should clean up the ZFS datasets it creates for stopped containers.

Output of docker version:

Client:
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.13.8
 Git commit:        afacb8b7f0
 Built:             Wed May 27 06:28:33 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.8
  Git commit:       afacb8b7f0
  Built:            Tue May 19 09:01:22 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu2
  GitCommit:        
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:        
 docker-init:
  Version:          0.18.0
  GitCommit:        

Output of docker info:

Client:
 Debug Mode: false

Server:
 Containers: 2
  Running: 0
  Paused: 0
  Stopped: 2
 Images: 2
 Server Version: 19.03.8
 Storage Driver: zfs
  Zpool: rpool
  Zpool Health: ONLINE
  Parent Dataset: rpool/var/lib/docker
  Space Used By Parent: 378417152
  Space Available: 510797041664
  Parent Quota: no
  Compression: lz4
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 
 runc version: 
 init version: 
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-33-generic
 Operating System: Ubuntu 20.04 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 15.5GiB
 Name: casanier
 ID: DMLS:AI7V:3SIT:XS3M:5BP5:XHA4:QSRB:DSHI:YWBJ:WZCX:MNAS:YKIO
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
@didrocks didrocks changed the title docker ZFS driver creates hundreds of datasets and don’t clean them docker ZFS driver creates hundreds of datasets and doesn’t clean them Jun 2, 2020
@AkihiroSuda AkihiroSuda added area/storage/zfs kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. and removed kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. labels Jun 2, 2020
@AkihiroSuda
Member

goes quickly out of control as people don’t remove stopped containers (via docker rm)
..
I think docker should clean up the ZFS datasets it creates for stopped containers.

No, this breaks docker start.

@thaJeztah
Member

Yeah, we can't remove containers that are stopped, as the container isn't gone. If your use-case is to start "one-off" containers, but you don't care about them after they've exited, you could either start the containers with the --rm option (which should automatically remove them after they exit), or use docker container prune to remove stopped containers (or docker system prune to clean up other unused resources as well).
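
For illustration, the commands mentioned above, all standard docker CLI (the prune commands delete data, so read their confirmation prompts):

docker run --rm alpine echo hello   # container is removed automatically when it exits
docker ps -a                        # stopped containers are only visible with -a
docker container prune              # remove all stopped containers
docker system prune                 # also remove unused networks, dangling images, build cache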

@didrocks
Author

didrocks commented Jun 2, 2020

I understand, this is a complex issue then. Note that this is not my use case, but rather what our users on ubuntu report.

The main issue is that a lot of tools list the ZFS datasets, and docker creates a bunch of them when users never --rm their containers (do you have any kind of garbage collection for stopped containers that haven’t run for X period of time?). Since docker ps doesn’t show stopped containers by default, users think they are gone. Most of the users reporting this issue of too many datasets were not even aware that docker was creating them (nothing in the name is explicit) and thought it was another built-in backup functionality.

Do you have any idea how we can mitigate this? Maybe the ZFS driver should be explicitly enabled by the user, so that they know what the gains and tradeoffs are?

@AkihiroSuda
Member

zsys should just ignore Docker (and containerd, cri-o, and LXD) datasets

@didrocks
Author

didrocks commented Jun 2, 2020

Correct, hence my question: before reading the properties (listing all of them to get the correct one), how do I know that a given dataset is a docker one?

@AkihiroSuda
Member

The first step is to hardcode /var/lib/docker.

Eventually zsys should define a convention to ignore filesystems that have a specific file, e.g. "/.zsysignore".

@didrocks
Author

didrocks commented Jun 2, 2020

The first step is to hardcode /var/lib/docker.

Look at the example I gave above: there is no /var/lib/docker in rpool. Those are set as mountpoints by your driver. However, to get the mountpoint, you have to read the properties, hence the timeout. ZFS dataset names don’t necessarily match mountpoint paths. All of that happens even before mounting them (so nothing related to a .zsysignore inside a dataset would help).
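
As an illustration of the cost didrocks describes, identifying docker's datasets by mountpoint looks something like this sketch, which has to read the mountpoint property of every dataset on the system (exactly the step that gets slow with hundreds of them):

# print datasets whose mountpoint lives under /var/lib/docker
zfs list -H -o name,mountpoint | awk '$2 ~ "^/var/lib/docker/"'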

@thaJeztah
Member

do you have any kind of garbage collection for stopped container that didn’t run for X period of time?

No, docker does not automatically garbage-collect those. Creating containers (without starting them), or keeping stopped containers around, is part of various use-cases, so we cannot assume those containers can be removed.

Maybe the ZFS driver should be explicitely enabled by the user so that they know what their gain is and tradeoffs?

The default storage driver for most situations is overlay2 (overlayFS), but if the backing filesystem is zfs or btrfs, then docker defaults to using those (in that case, it assumes the user set up those filesystems to manage container filesystems as well); perhaps this could be changed for a future version, but I'm not sure if overlayFS on top of either zfs or btrfs works (🤔)

@didrocks
Author

didrocks commented Jun 2, 2020

The default storage driver for most situations is overlay2 (overlayFS), but if the backing filesystem is zfs or btrfs, then docker defaults to using those (in that case, it assumes the user set up those filesystems to manage container filesystems as well); perhaps this could be changed for a future version, but I'm not sure if overlayFS on top of either zfs or btrfs works (thinking)

Yeah, unfortunately (or fortunately ;)), we now offer a very simple way for people to install a ZFS system on ubuntu, and (obviously, judging from the bugs we receive) they don’t know that docker is going to use it, or even how it uses it.

They all experience the same effects: after a while (I think after having started some hundreds of containers without removing them), their whole system is slow to boot (mounting all datasets at boot), zfs and zpool commands are slow, and so on. So this is completely independent of whether they use ZSys or not. We patched the docker package to migrate the datasets to rpool/var/lib/docker, to avoid snapshotting them automatically and thus creating even more datasets.

There is obviously something to fix to avoid this behavior, but as you said, this isn’t obvious. I wonder what the gain is, in the docker case, of creating one dataset for each container (I think the idea is to have the diff between the base image and to mount another dataset on top of that). I don’t know either whether overlayfs works well on top of a pure ZFS system.

@thaJeztah
Member

I'm definitely not a zfs expert, so if anyone knows:

  • if overlayfs on top of a zfs filesystem would be problematic (or "just works")
  • if there's ways to somehow label zfs datasets to identify them as being created by docker/containerd/cri-o/lxd

Then that information would be welcome 😅
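
On the second bullet: ZFS supports arbitrary user properties (names of the form module:property), so an engine could in principle tag the datasets it creates. A hypothetical sketch; the property name below is made up, and no container engine sets such a tag today:

# tag a layer dataset at creation time (property name is an example, not a standard)
zfs set com.example:created-by=docker rpool/var/lib/docker/<layer-id>

# snapshot tools could then skip anything carrying the tag
zfs get -r -H -o name,value com.example:created-by rpool | awk '$2 == "docker" {print $1}'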

@AkihiroSuda
Member

ovl on zfs is here: openzfs/zfs#9414

@thaJeztah
Member

Thanks for linking that!

I guess it wouldn't "solve" the full issue if users don't clean up containers, but we could consider defaulting to overlayfs and making zfs an opt-in option.

@didrocks
Author

didrocks commented Jun 2, 2020

I guess it wouldn't "solve" the full issue if users don't clean up containers, but we could consider defaulting to overlayfs and making zfs an opt-in option.

I think this is the best course of action! 👍

@AkihiroSuda
Member

The ovl-on-zfs PR isn't merged yet and is unlikely to become available to 20.04 LTS users, so we should probably have some workaround on the zsys side

@didrocks
Author

didrocks commented Jun 2, 2020

@AkihiroSuda: we have a workaround on the docker package side for now, but it’s not enough.

As explained a couple of posts ago, this has nothing to do with ZSys, but rather with the whole boot experience and various zpool and zfs commands being slowed down by this huge number of datasets.

We can work with ZFS upstream to have ovl-on-zfs merged (and this will likely be backported to our LTS if we deem it important enough)

@Lockszmith-GH

OverlayFS over ZFS would probably work (eventually), but it isn't a great idea.
That is because ZFS's layering (via snapshots and clones) is far more effective.

Most likely what is 'killing your system' is the snapshots created by zsys whenever an update is applied (via apt-get).
And addressing that is the solution @didrocks is currently implementing.

ZFS improves performance (even when zsys is around) as long as it is used properly. Using OverlayFS might improve boot time, but ZFS (used correctly) will improve everything.

Hope this helps.

@snajpa

snajpa commented Jun 25, 2020

@didrocks if you guys have some time to pitch in with some help writing some ZFS tests, that's actually all that's blocking that work from being merged.

One major TODO would be the OverlayFS tests themselves, which I am not sure I'm competent enough to devise the methodology for;
the other is tests of ZFS rollback/receive from the Linux VFS perspective (to make sure apps don't crash and that everything happens on a live mounted rollback/receive, as expected).

Any and all help in that direction is greatly appreciated.

@Rain

Rain commented Jul 10, 2020

I think the issue I had with zsys and Docker is related to this: If zsys automatically creates a snapshot of a running container (or related image/layer), when that container is stopped & removed, docker attempts to remove all relevant datasets. The datasets can't be removed since they still have snapshots (effectively, zfs destroy -r ... would have been required). At this point docker treated the containers as removed, but noted "Removal In Progress" in docker ps -a.

At first I tried manually removing all of the zsys snapshots and removing the containers again, but docker didn't try to remove the ZFS datasets again (they should have already been removed, the command to remove them had already been issued; docker was just waiting for the datasets to disappear).

I didn't have time to mess with it and instead just destroyed everything under /var/lib/docker and created a new dataset on another pool (where zsys isn't automatically creating snapshots).

Personally, I don't think the above issue is with Docker as much as it is with zsys not having an option to ignore particular datasets. An option in zsys similar to com.sun:auto-snapshot=false would be helpful in this case. Is there even a valid use-case for creating snapshots (zsys or otherwise) of Docker containers/images/etc.? I can't really think of any, but I'm relatively new to Docker.
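
For reference, the manual cleanup implied above would look roughly like this sketch (the dataset name is a placeholder; -R is destructive and also destroys dependent clones, i.e. layers that other images or containers may build on):

# show the snapshots and clones still pinning a docker layer dataset
zfs list -t all -r rpool/var/lib/docker/<layer-id>

# destroy the dataset together with its snapshots and dependent clones (DANGEROUS)
zfs destroy -R rpool/var/lib/docker/<layer-id>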

@didrocks
Author

@didrocks if you guys have some time to pitch in with some help writing some ZFS tests, that's actually all that's blocking that work from being merged.

Sorry, what work is pending merging? Happy to see if we can spend some time helping you with this.

@Rain: this isn’t exactly the case; there is no way for ZSys to know whether a particular dataset is a container or not, as no property is set on it. Note that anyone can create backups, not only ZSys (there are a bunch of sysadmin snapshot tools that people install), and the result would be exactly the same.
For ZSys, on ubuntu, we now move the docker data away from the rpool system datasets to a persistent place, which fixes ZSys (if docker is installed from our repo), but that doesn’t fix other tools.

@snajpa

snajpa commented Jul 15, 2020

@didrocks openzfs/zfs#9600 (comment)

Thank you for your offer! ;)

Edit: also, the OverlayFS tests themselves in openzfs/zfs#9414 (currently @openzfs/zfs@5ce120c) are pretty weak. Those also need a bit of attention & love.

@davidgreystahl

davidgreystahl commented Nov 1, 2020

I ran into this issue when trying to clean up my docker containers. For removal of a docker container, wouldn't one solution be to have the docker zfs storage driver know to use:

zfs destroy -R rpool/ROOT/ubuntu_/var/lib/

and take the snapshots out along with the dataset being removed, as part of the docker rm command?

I'll see if I can make that change locally in my environment [most likely to take me a couple of years - I'm old and slow].

Likewise, couldn't there be a script to clean up snapshots on docker datasets? List all the docker datasets, find all the zfs snapshots on those datasets, and remove those snapshots? I'll see if I can figure that out.

That would provide a workaround for the slow performance caused by all the snapshots.
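
A minimal sketch of such a cleanup script (the parent dataset path is an assumption, adjust it to your layout; it only echoes, so you can dry-run before swapping in the real zfs destroy):

#!/bin/bash
# Prune snapshots on docker-created datasets (sketch; dry-run by default).
PARENT="rpool/var/lib/docker"   # assumption: adjust to your layout

# list every snapshot under the docker parent dataset and print what would go
for snap in $(zfs list -H -o name -t snapshot -r "$PARENT"); do
    echo "would destroy: $snap"   # replace echo with: zfs destroy "$snap"
done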

@kraduk

kraduk commented Mar 10, 2021

The way I get around this issue, to use overlay on top of zfs, is to create a zvol on the relevant pool, format it as ext4, then mount it on /var/lib/docker. Then you get the best of all worlds, e.g.:

zfs create -V 10G rpool/docker
mkfs.ext4 /dev/rpool/docker
mount /dev/rpool/docker /var/lib/docker
tail -1 /etc/mtab >> /etc/fstab

@exstral

exstral commented May 24, 2021

The solution posted by @kraduk worked better for me like this with sudo, might help someone else :)

sudo zfs create -V 20G rpool/docker
sudo mkfs.ext4 /dev/rpool/docker
sudo mkdir -p /var/lib/docker
sudo mount /dev/rpool/docker /var/lib/docker
sudo tail -1 /etc/mtab | sudo tee -a /etc/fstab
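
Rather than copying the mtab line verbatim, an explicit /etc/fstab entry could be used instead (the /dev/zvol/... path is the udev symlink OpenZFS creates on Linux; verify which symlink exists on your system before relying on it):

/dev/zvol/rpool/docker  /var/lib/docker  ext4  defaults  0  0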

@almereyda

almereyda commented Aug 24, 2021

Since the discussion in #40132 recently continued, and since we cannot make zsys alone responsible for eventual snapshots of the (already, in their own sense, immutable) container datasets produced by the ZFS storage driver (other ZFS auto-snapshotting infrastructures will yield the same effect), the request here may be to allow Docker to force-remove the datasets it created itself.

Like @Rain suggested above, and as we have shown in ubuntu/zsys#200 (comment), the error from the Docker ZFS driver itself suggests deleting the datasets with -R:

cannot destroy 'rpool/ROOT/ubuntu_vno50r/var/lib/docker/34f57f25cd7a91ddade2dac9454eea665c23408422f2059eb33cde86971082ac': filesystem has dependent clones
use '-R' to destroy the following datasets:

Maybe this flag could become the default behaviour of the ZFS driver, allowing Moby/Docker autonomy over its very own datasets, so that it is not tripped up by the side-effects of eventual third-party snapshots?

@nergdron

nergdron commented Oct 5, 2022

just ran into this on some hosts after upgrading to ubuntu 22.04. would love for @almereyda's suggestion to be implemented, even just as a docker zfs driver option, to force removal of dependent snapshots when removing a docker container. that at least would be sufficient to stop this breaking systems.

@callumgare

This isn't a proper workaround, but for anyone who'd just like to be able to check zfs list easily without having to sift through hundreds of docker datasets, I've been using sudo zfs list | grep -v -E '[0-9a-f]{64}' to filter them out.

@kraduk

kraduk commented Feb 10, 2023 via email

@nonlinearsugar

nonlinearsugar commented Feb 10, 2023

Not an ideal solution, but it does the trick for now. I've been running it every 24h for a few weeks and it's working well.

  1. This script scans for the docker datasets. Once it finds them, it disables snapshotting on them (which prevents the problem going forward) and destroys any existing snapshots (which fixes the problem that has accumulated up to now).
  2. It also prunes old docker images.

This script is dangerous. It will not work on your system unless you modify it first.

  1. Install this first: https://github.com/bahamas10/zfs-prune-snapshots
  2. Design the regular expression match to target your setup. I have a dedicated filesystem called "var-lib-docker" on my pool "mainpool" for all my docker datasets. Mine look like:
    "mainpool/var-lib-docker/cf275df392ce2bb98d963de6274e231b589caa26563edbf93a9b7fef302dddf1"
    "mainpool/var-lib-docker/cf275df392ce2bb98d963de6274e231b589caa26563edbf93a9b7fef302dddf1-init"

For my system/my example, the following regex works:
zfsDatasets=$(zfs list -o name | grep --extended-regexp 'mainpool\/var-lib-docker\/([a-z]|[0-9]){64}$|mainpool\/var-lib-docker\/([a-z]|[0-9]){64}-init$')

Verify the match by running this. It will give you a list of the datasets it intends to disable and destroy snapshots on.

for zfsDataset in $zfsDatasets
do
    echo $zfsDataset
done
  3. Update the "zfsDatasets=" line in the script below, and give it a rip. If you've thousands of snapshots, it'll operate for a while. It takes a few seconds per snapshot while not being IO intensive, so the script is designed to operate on each dataset in parallel.

#!/bin/bash

docker image prune --all --force --filter "until=168h"

zfsDatasets=$(zfs list -o name | grep --extended-regexp 'mainpool\/var-lib-docker\/([a-z]|[0-9]){64}$|mainpool\/var-lib-docker\/([a-z]|[0-9]){64}-init$')

for zfsDataset in $zfsDatasets
do
    zfs set com.sun:auto-snapshot=false $zfsDataset &
done

for zfsDataset in $zfsDatasets
do
    zfs-prune-snapshots 0s $zfsDataset &
done
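
To run it every 24h as described above, a crontab entry might look like this (the script path is hypothetical):

# nightly at 03:00; adjust the path to wherever the script was saved
0 3 * * * /usr/local/sbin/docker-zfs-snapshot-prune.sh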

@kraduk

kraduk commented Feb 10, 2023 via email

@nonlinearsugar

It's been about 3 weeks since I went through all the R&D, but if I recall correctly, prune only worked on unused images, and the problem of accumulating datasets was impacting my running containers. Prune is a good maintenance item for docker generally, but the script I wrote specifically addresses the problem of accumulating snapshots whether containers are running or not.

@kraduk
Copy link

kraduk commented Feb 10, 2023 via email

@darkpixel

@kraduk > That will help but unfortunately not deal with the slow down from so many datasets

So don't use the ZFS storage driver. Use the normal one and point it to /tank/docker or whatever.

No need to create datasets for every container if you feel it's slow.

I have a ~20 TB array consisting of 8 * 4 TB drives, 1 * 2 TB SSD cache drive, and 2 * 250 GB SSD mirrored log drives.

I have thousands of datasets and several thousand snapshots on each.

zfs list -rt snapshot -o name -s creation takes about 3 seconds.
zfs list -o space,compression,compressratio,encryption,keystatus returns instantly.

@malventano

@darkpixel it may be fast for you, but clearly it is not fast for others. 1281 docker datasets here and it takes >3s to list. It's not just about waiting a few seconds, as the overhead delays any scripts pulling dataset stats, etc.

@darkpixel

darkpixel commented Feb 11, 2023

@darkpixel it may be fast for you, but clearly it is not fast for others. 1281 docker datasets here and it takes >3s to list. It's not just about waiting a few seconds, as the overhead delays any scripts pulling dataset stats, etc.

There are actually two parts to this bug. One is the performance issue; the other part is "not cleaning up datasets". One is fixed by changing your ZFS setup, adding RAM, a bigger CPU, faster disks, etc. The other can be worked around temporarily (until that part of the bug is fixed) by pointing docker at a storage location (which can be ZFS-backed or not) that doesn't use the ZFS driver. That's what I do on all my boxes, because they don't have non-ZFS storage: I create tank/docker and do dockerd --data-root /tank/docker. I do that because I hate seeing a million datasets under tank/docker when I do a zfs list. I end up running zfs list and then facepalming and running zfs list | grep -v tank/docker.
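
For what it's worth, the same relocation can be made persistent in /etc/docker/daemon.json instead of passing the flag on the command line (data-root is the actual daemon option; the path is the one from the comment above):

{
  "data-root": "/tank/docker"
}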

@timkgh

timkgh commented Feb 11, 2023

ext4 on a zvol works great for me. It uses the overlay2 storage driver.
#41055 (comment)

@kraduk

kraduk commented Feb 11, 2023 via email

@satmandu

I generally do that, but specifically because I want to use the overlayfs driver and don't mind losing some of the zfs features. This may not be desirable in many cases, e.g. people who have a requirement of snapshotting. Although I'm not 100% sure why they would have so much state in their containers.

Overlayfs works great with zfs as long as you use a version from the openzfs/master branch which will become OpenZFS version 2.2...

@ams-tschoening

ams-tschoening commented Feb 20, 2023

I think the issue I had with zsys and Docker is related to this: If zsys automatically creates a snapshot of a running container (or related image/layer), when that container is stopped & removed, docker attempts to remove all relevant datasets. The datasets can't be removed since they still have snapshots (effectively, zfs destroy -r ... would have been required). At this point docker treated the containers as removed, but noted "Removal In Progress" in docker ps -a.

Same problem here with zfs-auto-snapshot on Proxmox 7.3, based on Debian Bullseye. Before using Docker, I had only a very few manually created datasets for special purposes, and wanted to auto-snap all of them on purpose. I've created a dataset for Docker as well, configured data-root in its JSON config file to use it, and would like to auto-snap that one dataset as well by default. But I'm fairly sure I don't need additional snaps for each and every container and image layer.

I don't like using a ZVOL with an additional EXT4 and can't use openzfs/master, so a workaround might be to opt in to auto-snapping some datasets only. In my case those are fewer than what Docker creates.

Alternately, check out the --default-exclude description:

By default zfs-auto-snapshot will snapshot all datasets except for those in which the user-property com.sun:auto-snapshot is
set to false. This option reverses the behavior and requires com.sun:auto-snapshot to be set to true.

https://askubuntu.com/a/1119949/512299
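
With that flag in place, opting in becomes an explicit per-dataset action (a sketch; the dataset name is a placeholder):

# with --default-exclude, only datasets explicitly marked true get snapshotted
sudo zfs set com.sun:auto-snapshot=true rpool/my-important-dataset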

@darkpixel

Same problem here with zfs-auto-snapshot on Proxmox 7.3, based on Debian Bullseye. Before using Docker, I had only a very few manually created datasets for special purposes, and wanted to auto-snap all of them on purpose. I've created a dataset for Docker as well, configured data-root in its JSON config file to use it, and would like to auto-snap that one dataset as well by default. But I'm fairly sure I don't need additional snaps for each and every container and image layer.

I recommend checking out https://github.com/jimsalterjrs/sanoid.

You can set how/when snapshots are taken fairly easily, and you can exclude certain datasets.
The only down-side appears to be that it doesn't do "atomic" snapshots (i.e. zfs snapshot -r tank/my-virtual-machines); it appears to snapshot each dataset individually.

Anyways, I only have ZFS filesystems available, and for operational reasons I have to use the zfs storage provider, so I add this to my sanoid.conf file:

[template_prune] 
autoprune = yes 
monitor = no
frequent_period = 15
yearly = 0
monthly = 0
weekly = 0
daily = 0
hourly = 6
frequently = 8
prune_defer = 0
autosnap = 0
hourly_min = 0
daily_hour = 0
daily_min = 0
weekly_wday = 0
weekly_hour = 0
weekly_min = 0
monthly_mday = 1
monthly_hour = 0
monthly_min = 0
yearly_mon = 1
yearly_mday = 1
yearly_hour = 0
yearly_min = 0

[tank/docker]
recursive = zfs
use_template = prune

@ams-tschoening

ams-tschoening commented Mar 3, 2023

According to the docs, recursive = zfs is atomic, done by ZFS for all contained datasets. That's why I wonder how your config ignores Docker: looking at the code and my config, it shouldn't, and something like the following seems to work instead:

[rpool]
        use_template    = prod
        recursive       = yes

[rpool/data/encr/docker]
        use_template    = prod
        skip_children   = yes
rpool/data/encr/docker@autosnap_2023-03-03_20:36:10_weekly                                                 0B      -     33.6G  -
rpool/data/encr/docker@autosnap_2023-03-03_20:36:10_daily                                                  0B      -     33.6G  -
rpool/data/encr/docker@autosnap_2023-03-03_20:36:10_hourly                                                 0B      -     33.6G  -
rpool/data/encr/docker@autosnap_2023-03-03_20:36:10_frequently                                             0B      -     33.6G  -
rpool/data/encr/docker/0831de625e9cbe9287f39540cc6911dd63b074b8c8583b48bc0c775e94e5754b@620731414          0B      -     1.99G  -
rpool/data/encr/docker/14eb835092bc10d2947bdd38fd01dac2cb0e8bf71104f36ccab41f1f4ab8abb4-init@582353188     0B      -      122M  -
rpool/data/encr/docker/1cac7f11507bb011295c4ac7ea6177c69fed63ef89c467496d1298283ace701a@868749595          0B      -     99.9M  -
rpool/data/encr/docker/47246c4b5513610b2e2b4e778663e4e1d9260d6d71464b55818750dc1c22a30f@538809144          0B      -      122M  -
rpool/data/encr/docker/4bff0872a87b0483d412d19cdcf8dc91091b07d57a7068992d6ffe5894a42a7c@99641705           0B      -     99.9M  -
rpool/data/encr/docker/544545034e033e3d4c1b5e9288af11202d12a7a04776dcedbe0d958fcf2aa5e7@259339700          0B      -     55.0M  -
rpool/data/encr/docker/56906b49f73ebd7cb82305d43f6d0d6e27f6d242b2a955f3b64603f27011ee9d@667933448          0B      -      384K  -
rpool/data/encr/docker/77a1f00cc6587268d9010947eaa73824a2aa76de45e69124b674745fa91e43a0@423240362          0B      -      392K  -
rpool/data/encr/docker/802aeb54c6bcc2420e9e31e78bd84fac2ff9b71dd6b1eb46c423469028f83c5b@784872630          0B      -     99.9M  -
rpool/data/encr/docker/a1c6c04bb3827d3f1254ad67f4f0d9c54cf78675c6a9e904711bfedd9d429691-init@666644542     0B      -     1.99G  -
rpool/data/encr/docker/baa39c98d04454f92bdf3b0e0984a7e6ee493e25c54759b7d333c7755a7bd3d1@943746089          0B      -     99.9M  -
rpool/data/encr/docker/e4fbb84616fc59d82a6ee5cfd52fcefc39a9ebf50be14db6d6a92c730d42c304@466462406          0B      -      122M  -
rpool/data/encr/docker/e60a6afa843ba0341e9fc53e605f6f6a10f024721a5a2116f05ef4ea24a6b482@19380794           0B      -     99.9M  -
rpool/data/encr/docker/f0f3120081809e0ea04cbaed0a15a68e0bfa61b39c50ef653232f41253fb52db@380690389          0B      -     99.9M  -
rpool/data/encr/home@zfs-auto-snap_weekly-2023-01-08-0547

Another example from the Sanoid issue tracker:

[socks]
        use_template = production
        recursive = yes

[socks/docker]
        use_template = none
        recursive = yes

[template_production]
        frequently = 4
        hourly = 36
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes

[template_none]
        autosnap = no
        autoprune = no
        frequently = 0
        hourly = 0
        daily = 0
        monthly = 0
        yearly = 0

@wenerme

wenerme commented Mar 4, 2023

finally, zfs 2.2 will support overlay

@thoernle

thoernle commented Mar 9, 2023

finally, zfs 2.2 will support overlay

What does this mean? The datasets created are no longer listed?

@wenerme

wenerme commented Mar 9, 2023

finally, zfs 2.2 will support overlay

What does this mean? The datasets created are no longer listed?

No need to use the zfs driver
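
Concretely, on OpenZFS 2.2 or later one could then select overlay2 explicitly in /etc/docker/daemon.json (storage-driver is the actual daemon option; whether a given kernel/ZFS combination accepts overlayfs over zfs still needs verifying on your setup):

{
  "storage-driver": "overlay2"
}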

@kraduk

kraduk commented Mar 9, 2023 via email

@satmandu

FYI OpenZFS 2.2.0 is out now, which supports Overlay2!
