Can we use MinIO as S3-compatible storage for Apache Iceberg? #6
Comments
We have not used MinIO yet, but the MinIO documentation states that it is compatible with the S3 API and can be configured with Spark applications, so it should work with CueLake as well. Steps for custom configurations will be documented soon; we are still figuring out the best way to support them. We will keep this issue open until the documentation covers custom configurations like this one.
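Since MinIO speaks the S3 API, pointing Spark (and therefore Iceberg) at it usually comes down to a handful of Hadoop S3A properties. Below is a minimal sketch of those settings; the endpoint URL, service name, and credentials are placeholders, not CueLake defaults.

```python
# Hedged sketch: Hadoop S3A properties typically needed to point Spark's
# s3a:// filesystem at a MinIO endpoint instead of AWS S3.
# The endpoint/credentials here are hypothetical placeholders.

def minio_s3a_conf(endpoint, access_key, secret_key, use_ssl=False):
    """Return spark-conf key/value pairs for an S3-compatible MinIO backend."""
    return {
        "spark.hadoop.fs.s3a.endpoint": endpoint,
        "spark.hadoop.fs.s3a.access.key": access_key,
        "spark.hadoop.fs.s3a.secret.key": secret_key,
        # MinIO is usually addressed by path, not virtual-hosted bucket names
        "spark.hadoop.fs.s3a.path.style.access": "true",
        # Plain HTTP is common for in-cluster MinIO; enable TLS if configured
        "spark.hadoop.fs.s3a.connection.ssl.enabled": str(use_ssl).lower(),
    }

if __name__ == "__main__":
    conf = minio_s3a_conf("http://minio.example.svc:9000",
                          "minioadmin", "minioadmin")
    # Print as spark-submit flags for illustration
    for key, value in conf.items():
        print(f"--conf {key}={value}")
```

The same key/value pairs could equally be applied via `SparkSession.builder.config(...)` or a Zeppelin Spark interpreter setting; which mechanism CueLake will expose is exactly what this issue asks about.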
If we can use MinIO with CueLake, what about AWS Glue? Can we use Hive Metastore? We are also still confused about how Zeppelin connects to the Spark cluster.
Yes, you can use both AWS Glue and Hive as the metastore for Iceberg. CueLake's default configuration is a Hive metastore with Postgres as the backend database.
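For reference, an Iceberg catalog backed by a Hive metastore is wired up through `spark.sql.catalog.*` properties. The sketch below shows the typical shape of that configuration; the catalog name, thrift URI, and warehouse path are illustrative assumptions, not CueLake's actual values.

```python
# Hedged sketch: Spark properties for an Iceberg catalog backed by a
# Hive metastore. Names/URIs below are hypothetical examples.

def iceberg_hive_catalog_conf(name, metastore_uri, warehouse):
    """Return spark-conf entries registering an Iceberg Hive catalog."""
    prefix = f"spark.sql.catalog.{name}"
    return {
        prefix: "org.apache.iceberg.spark.SparkCatalog",
        f"{prefix}.type": "hive",          # use "hadoop" for a filesystem catalog
        f"{prefix}.uri": metastore_uri,    # Hive metastore thrift endpoint
        f"{prefix}.warehouse": warehouse,  # e.g. an s3a:// bucket path
    }

if __name__ == "__main__":
    conf = iceberg_hive_catalog_conf(
        "iceberg",
        "thrift://hive-metastore.example.svc:9083",
        "s3a://warehouse/",
    )
    for key, value in conf.items():
        print(f"{key}={value}")
```

For AWS Glue, Iceberg instead uses its `GlueCatalog` implementation (set via the catalog's `catalog-impl` property); the Hive variant above matches the default Hive-plus-Postgres setup described in the comment.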
@vikrantcue, thank you for confirming. Looking forward to an update once the MinIO integration has been tested with CueLake. If possible, exposing the MinIO endpoint in the ConfigMap would improve the user experience. CueLake could be one of the fastest ETL/ELT tools thanks to the Spark cluster and Iceberg on object storage. We are looking forward to the MinIO update, thank you.
Is your feature request related to a problem? Please describe.
Can we use MinIO as S3-compatible storage for Apache Iceberg?
Describe the solution you'd like
Can we use MinIO as S3-compatible storage for Apache Iceberg?
Describe alternatives you've considered
If we can use MinIO, we need the steps to configure MinIO with CueLake.
Additional context
Can we use MinIO as S3-compatible storage for Apache Iceberg?