Hello,
Thank you for this CSI driver; I have tested it with both Minio and AWS S3 and it works like a charm.
However, there is an issue which also exists in csi-s3: if the controller or daemonset pod restarts, you can no longer access any mounted directory until you restart the pod manually.
Related: ctrox/csi-s3#34
That's a really cool workaround, btw. I was toying with the idea of remounting previous mounts after a container crash, but that means credentials and mountpoints would have to be stored on some persistent drive, and I'm still not sure the mount itself would survive. It would also mean some development time is required.
Another idea I had was using the native Go bindings from librclone (if I understood correctly what it does) instead of the binary FUSE mount, which might work even better. Still, I'm not sure how to use it (even the smallest Go example would help), and it would also require dev time.
That would be awesome, @sarendsen! Even a simple example wrapper would do for initial tests, to see how it works with CSI RPC calls.
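For initial tests, a minimal wrapper might look something like the sketch below. It assumes the `github.com/rclone/rclone/librclone/librclone` Go package (`Initialize`, `RPC`, `Finalize`) and the `mount/mount` rc call; the connection-string remote, endpoint, credentials and mount point are placeholders, and the exact parameters and whether `mount/mount` is linked into the build should be double-checked against the rclone rc docs.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/rclone/rclone/librclone/librclone"
)

// rpc wraps librclone.RPC: it marshals the parameters to JSON and treats
// any non-2xx status as an error.
func rpc(method string, params map[string]interface{}) (string, error) {
	in, err := json.Marshal(params)
	if err != nil {
		return "", err
	}
	out, status := librclone.RPC(method, string(in))
	if status < 200 || status >= 300 {
		return out, fmt.Errorf("%s returned status %d: %s", method, status, out)
	}
	return out, nil
}

func main() {
	librclone.Initialize()
	defer librclone.Finalize()

	// Sanity check that the embedded rclone answers rc calls.
	version, err := rpc("core/version", map[string]interface{}{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("core/version:", version)

	// Hypothetical mount call: the connection-string remote, endpoint,
	// credentials and mount point are placeholders.
	_, err = rpc("mount/mount", map[string]interface{}{
		"fs":         ":s3,provider=Minio,endpoint=http://minio:9000,access_key_id=KEY,secret_access_key=SECRET:my-bucket",
		"mountPoint": "/mnt/my-bucket",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("mounted /mnt/my-bucket; unmount via the mount/unmount rc call")

	// Keep the process alive so the FUSE mount stays served.
	select {}
}
```

In a CSI driver this would presumably live in the node plugin, with NodePublishVolume issuing the mount/mount call and NodeUnpublishVolume issuing mount/unmount, instead of spawning an rclone binary.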
When I have to upgrade this plugin, I just kill the pods mounting the storage class and it remounts them. That does not happen too often, and I have not seen it crash on its own.