[Epic] Provide local filesystem-backed mock S3 service #67
For users who do not have an external S3 provider available, Cryostat 3 should include some barebones implementation that simply writes through to the local filesystem (e.g. a k8s PVC, as in prior Cryostat versions). Ideally this would be done in-process, but it might be a separate process running in the same container, or perhaps a separate sidecar container in the same pod.

Potential leads:

Comments
@Josh-Matsuoka if you're interested in pursuing this, I think adobe/S3Mock is probably the first lead to chase down. I imagine something like running that either in-process or as a sidecar process launched by cryostat3 if there is no other config for an S3 provider. Ideally, if the user doesn't specify an S3 provider via SmallRye Config properties (e.g. an env var), then they just get S3Mock with no additional configuration required. This should log a warning message telling the user not to rely on this, since it isn't actually production grade, and to use the relevant config property/properties to set up a real S3 provider.
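A minimal sketch of what that fallback detection could look like with SmallRye Config in a Quarkus app. The property name `storage.ext.url` and the class are hypothetical placeholders rather than actual cryostat3 code; the point is just that an unset property selects the embedded mock backend and logs the warning:

```java
import java.util.Optional;

import io.quarkus.runtime.StartupEvent;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.jboss.logging.Logger;

@ApplicationScoped
public class StorageProviderSelector {

    private static final Logger LOG = Logger.getLogger(StorageProviderSelector.class);

    // Resolved by SmallRye Config from system properties, env vars
    // (STORAGE_EXT_URL), application.properties, etc. Property name is illustrative.
    @ConfigProperty(name = "storage.ext.url")
    Optional<String> externalS3Url;

    void onStart(@Observes StartupEvent ev) {
        if (externalS3Url.isEmpty()) {
            LOG.warn("No S3 provider configured (storage.ext.url is unset)."
                    + " Falling back to the embedded mock S3 service backed by the local filesystem."
                    + " This is NOT production grade - set storage.ext.url to point at a real S3 provider.");
            // here we would start S3Mock in-process, or launch it as a sidecar process
        }
    }
}
```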
https://github.com/findify/s3mock looks pretty similar to Adobe/s3mock, so it is also worth investigating if the Adobe one doesn't end up meeting our requirements for whatever reason. The Adobe one is Java with an Apache license and findify's is Scala with an MIT license, so we're good to go with either from that angle.
Eventually we would want to use this: https://container-object-storage-interface.github.io/. This would allow us to use the Bucket/BucketClaim k8s resources in the Operator for provisioning object storage and providing it to the Cryostat instance generically. K8s COSI entered alpha state in k8s v1.25, I don't see a ton of movement on it since, and the docs are very incomplete. From the docs that do exist, it looks like it will still rely on the actual providers to have a "datapath API", which would mean... "S3-compatible". So in the meantime the best we can really do is target the S3 API and let the user plug in a URL to their S3 provider, which might be AWS S3 or any of various providers (cloud service or self-hosted) of S3-compatible object stores.
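As a concrete illustration of the "plug in a URL" approach: with the AWS SDK v2, the provider endpoint is just another client setting, so the same code path can talk to AWS S3, Minio, SeaweedFS, S3Mock, and so on. This is a sketch, not cryostat3 code; the parameters would come from whatever config properties we expose, and forcePathStyle assumes a reasonably recent SDK v2:

```java
import java.net.URI;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class S3ClientFactory {

    /**
     * Build an S3 client against whatever S3-compatible provider the user points us at.
     * The arguments would come from config properties (endpoint URL, access key, secret key).
     */
    public static S3Client create(String endpointUrl, String accessKey, String secretKey) {
        return S3Client.builder()
                // any S3-compatible endpoint: AWS S3, Minio, SeaweedFS, S3Mock, ...
                .endpointOverride(URI.create(endpointUrl))
                // the SDK requires a region even if the provider ignores it
                .region(Region.US_EAST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(accessKey, secretKey)))
                // many self-hosted providers expect path-style rather than virtual-host-style requests
                .forcePathStyle(true)
                .build();
    }
}
```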
https://github.com/seaweedfs/seaweedfs looks interesting as well - it is also runnable as a single container, is Apache licensed, is written in Go, and looks very well supported. It seems overall similar to Minio in many ways for our purposes.
I think this can be closed now as we have determined a solution to move forward with. We will use an existing S3-compatible container image in all deployment scenarios, configured with a PersistentVolumeClaim in k8s to ensure the storage is persistent and durable, but this storage instance will (at least to begin with) be hidden within the Pod and not be interchangeable by end users. |