Support (S3, GCP, Azure) storage classes #737
Conversation
Hello @mohammad-aburadeh and sorry to keep you waiting for so long. I've finally managed to have a proper look at this and tried to test it out. Sadly, I have bad news.
In the context of S3, we need to limit the storage classes we support. It doesn't seem possible to read from Glacier and below, so we should not let Medusa write with those classes, because it wouldn't be able to read its own data back.
In the context of GCS, the explicit storage classes don't seem to work at all. The header just seems to be ignored. I didn't find a way to do this other than setting a default value on the entire bucket.
In the context of Azure, we need to pass enums, not strings, because that's what the client we use expects.
Because of this, I'm sorry to return the PR to you and kindly ask for one more iteration to fix/improve things.
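To make the first and third points concrete, here is a minimal sketch (not the PR's actual code; the allow-list, function names and tier mapping are assumptions) of limiting S3 uploads to storage classes Medusa can read back and converting the configured string into the enum the Azure SDK expects:

from azure.storage.blob import StandardBlobTier

# Hypothetical allow-list: S3 classes that stay directly readable, i.e. excluding
# Glacier-style archive tiers that would require a restore before download.
ALLOWED_S3_STORAGE_CLASSES = {"STANDARD", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING"}

def validate_s3_storage_class(storage_class: str) -> str:
    sc = storage_class.upper()
    if sc not in ALLOWED_S3_STORAGE_CLASSES:
        raise ValueError(f"Unsupported S3 storage class '{storage_class}': Medusa could not read the uploaded data back")
    return sc

def to_azure_blob_tier(storage_class: str) -> StandardBlobTier:
    # azure-storage-blob expects the StandardBlobTier enum rather than a raw string,
    # e.g. "hot" maps to the Hot tier.
    return StandardBlobTier(storage_class.capitalize())

Validation like this would run once when the storage config is parsed, so a misconfigured class fails fast instead of at upload time.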
resp = await self.gcs_storage.upload(
    bucket=self.bucket_name,
    object_name=object_key,
    file_data=data,
    force_resumable_upload=True,
    timeout=-1,
    headers=ex_header,  # ex_header carries the storage-class header discussed below
)
I could not get this to work with GCS. All the uploads I did ended up with the Standard storage class. Switching the default bucket storage class mode to Managed with Autoclass did not help either.
It's as if GCS ignores the HTTP header: even when I set it in the request, the response comes back with 'storageClass': 'STANDARD'.
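For reference, the bucket-level fallback mentioned above (setting a default storage class on the whole bucket) looks roughly like this with the synchronous google-cloud-storage client; the bucket name is made up and this is not the async client Medusa uses, so treat it as an illustration only:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-medusa-bucket")  # hypothetical bucket name
bucket.storage_class = "NEARLINE"               # default class applied to newly written objects
bucket.patch()                                  # persist the change on the bucket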
Hi everyone, is any help needed here? This feature is very interesting and I would like to try it ASAP.
Hi,
The backup was successful, but it is taking 1 hour to complete. The previous backups would finish within 2-5 minutes. We observed that the manifest.json file is what takes the most time. Can you please let us know what might be the issue?
I've implemented the suggested changes and added integration tests over at https://github.com/thelastpickle/cassandra-medusa/pull/777/checks
Hi,
Below are the tests done on the cluster:
Old Medusa version: 0.17.2
Test1 (new Medusa version):
Test2 (old Medusa version):
Test3 (old Medusa version):
Test4 (old Medusa version):
Can you please let us know why the new Medusa version with STANDARD_IA is taking more time?
Thanks,
Medusa does not support specifying the storage class when uploading backups to S3/GCP/Azure.
This is very important for many customers, as it can help reduce storage costs.
Closes #568
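For S3 at least, the feature ultimately comes down to passing the storage class alongside each upload. A minimal sketch with boto3 (not Medusa's actual S3 driver, and the bucket/key are made up) shows where the value ends up:

import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-medusa-bucket",          # hypothetical bucket
    Key="backups/node1/manifest.json",  # hypothetical object key
    Body=b"example data",
    StorageClass="STANDARD_IA",         # standard PutObject parameter
)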