Sort Key for GSI out of sync with CloudFormation Template #2707
Comments
It's worth noting that I also have an AWS Support ticket (#172062706600026) open with CloudFormation. They have advised me to open a ticket here.
Hi @matt-at-allera, thanks for raising this. It doesn't seem to be related to the amplify-js library, so I will transfer it to the amplify-category-api repo for better support.
Hi @matt-at-allera, can you confirm you're using version 12.11.1 of the Amplify CLI?
Amplify does support updates to GSIs: if a GSI has changed, the deployment deletes the index and recreates it. In addition, customers can batch more than one GSI update in a single schema push. That said, you're right that our recommendation for large schemas, or for tables with significant amounts of data, is to do it incrementally: delete the index, push, then re-add a new index with the updated sort key fields. The reason is that the time required to perform multiple GSI updates can exceed the Amplify CLI timeout or the expiration of the local credentials; in that case, we recommend the incremental approach. In a test I ran locally with a small schema and very short-lived credentials:
Based on your report...
...it sounds like an iterative rollback isn't an option? I'm curious to know what CloudFormation actually thinks the state of the current resources is. Can you provide the results of a stack drift detection on the nested model stack that actually owns the DynamoDB table and GSI? You can find that stack among the nested stacks under your Amplify environment's root stack.
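For reference, drift detection can also be triggered from the AWS CLI rather than the console. A sketch using the documented CloudFormation commands (the stack name and detection ID are placeholders):

```shell
# Kick off drift detection on the nested model stack (stack name is a placeholder).
aws cloudformation detect-stack-drift --stack-name <model-nested-stack-name>

# The call above returns a StackDriftDetectionId; poll it until
# DetectionStatus is DETECTION_COMPLETE.
aws cloudformation describe-stack-drift-detection-status \
  --stack-drift-detection-id <detection-id>

# Then list per-resource drift, which includes the DynamoDB table and its GSIs.
aws cloudformation describe-stack-resource-drifts \
  --stack-name <model-nested-stack-name>
```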
Hi @palpatim thanks for the response. Yes, I am running 12.11.1:
I've tried the iterative rollback, and it fails with the same error noted in my support ticket:
I have performed the drift detection several times. It currently states that there is no drift for either table, which is what led me to the assumption that Amplify is de-synced from the CloudFormation template. Results are attached. It had said there was drift when I manually reverted the table indices back to the last successful Amplify push. Should we continue communicating here rather than in the AWS support ticket?
Thank you for sharing the drift detection results. Also, I apologize for the forked communication -- I default to GitHub but am happy to communicate either here or via the support ticket! I'll post some questions here and duplicate them in the ticket, so please feel free to respond in whichever channel you prefer. Since I haven't been able to reproduce this locally, I want to summarize the current state:
Assuming that summary is accurate, the next step would be to do as you mentioned above: make Amplify reflect the current state of CloudFormation. We'll do that by editing the deployment state in the S3 deployment bucket, which is a bit tedious but relatively straightforward. I'll prepare some instructions for making those edits.
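The general shape of that process is inspecting and editing the deployment artifacts Amplify keeps in its S3 deployment bucket. A sketch using generic S3 commands (the bucket name and object key are placeholders; the actual artifacts to edit depend on your app and would come from the instructions above):

```shell
# List the deployment artifacts (bucket name is a placeholder).
aws s3 ls s3://<amplify-deployment-bucket>/ --recursive

# Download an artifact for inspection or editing...
aws s3 cp s3://<amplify-deployment-bucket>/<artifact-key> ./artifact

# ...then upload the edited copy back to the same key.
aws s3 cp ./artifact s3://<amplify-deployment-bucket>/<artifact-key>
```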
@palpatim of course and no worries, I'll keep responding here.
Your summary of the current state appears to be correct. I did not take perfect notes as to what caused us to get into this state, unfortunately. Knock on wood it doesn't happen again, but if it does, I'll be more methodical in tracking the steps that resulted in this de-synced state.
That's correct.
Perfect. I am unfamiliar with that process but had conjectured it would be necessary for resolution. I'll await your instructions!
@palpatim if you could send those instructions today, that would be very much appreciated. We've been blocked on making production pushes for 3 weeks now, and the pressure is mounting. I'm sure you understand the situation we're in. Having this resolved over the weekend is our priority. Please let me know if there is anything I can do to speed this up or provide additional information.
Sorry for the delay on this -- I am putting together instructions using a local environment that attempts to recreate your starting conditions (DDB & CFN reflecting the new index name; Amplify state reflecting the old index name), but I'm finding that as long as I've created the index names properly in DDB/CFN, I am able to update the name in my Amplify schema and let `amplify push` complete successfully. In other words, I haven't been able to reproduce the error you're seeing. Let's try comparing the CFN and Amplify stack definitions to see where the discrepancy is:
Next, let's get Amplify's view of the stack:
What I'm expecting to find is only differences related to the GSI's key schema.
My speculation is that there are other differences between the two templates as well. Once we narrow down the differences, we can figure out how to resolve them.
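To make that comparison concrete, here is a small sketch (helper names are my own; it assumes both templates have been saved locally as plain JSON) that pulls the GSI key schemas out of each template and reports any that differ:

```python
def gsi_key_schemas(template: dict) -> dict:
    """Map (logical table id, index name) -> KeySchema for every
    DynamoDB GSI defined in a CloudFormation template."""
    schemas = {}
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::DynamoDB::Table":
            continue
        gsis = resource.get("Properties", {}).get("GlobalSecondaryIndexes", [])
        for gsi in gsis:
            schemas[(logical_id, gsi["IndexName"])] = gsi["KeySchema"]
    return schemas


def diff_gsi_key_schemas(deployed: dict, local: dict) -> list:
    """Return [(key, deployed KeySchema, local KeySchema)] for every
    GSI whose key schema differs between the two templates."""
    a, b = gsi_key_schemas(deployed), gsi_key_schemas(local)
    return [(k, a.get(k), b.get(k))
            for k in sorted(set(a) | set(b))
            if a.get(k) != b.get(k)]
```

Feeding it the two downloaded templates (e.g., via `json.load`) should surface exactly the key schema entries that disagree, including any index that exists in only one of the two.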
Sorry, it appears the version of the schema uploaded to the support ticket is different from the one you're working from -- the schema version I am working from doesn't even contain the index definition in question. In any case, the bottom line is: we need to make your local schema.graphql look like the CloudFormation template before we do a push. The steps I propose:
After the push succeeds, you should be in sync and can make changes to indexes. Given the initial problems, I'd recommend making changes incrementally, with a push between each change.
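That incremental sequence can be illustrated with a hypothetical model (type, field, and index names here are made up, not taken from the actual schema):

```graphql
# Hypothetical model. Step 1: delete the @index entirely and run `amplify push`.
# Step 2: re-add it with the new sort key field, as below, and push again.
type Order @model {
  id: ID!
  customerId: ID! @index(name: "byCustomer", sortKeyFields: ["createdAt"])
  supplierSiteId: ID!
  createdAt: AWSDateTime!
}
```

Doing one index change per push keeps each deployment's GSI work small enough to finish within the CLI timeout and credential lifetime.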
Based on the fact that my schema version is not the same as yours, I agree with this assessment -- it's likely the range key changes that are causing the issue.
Got it. This all makes sense. I have definitely tried a deployment with these fields specified as you state, but also with a few other changes that may have gotten in the way. It's good to isolate it to the smallest set of changes we can perform at once. In a few hours, when our production traffic dies down, I will perform the steps on both affected tables.
I should have noted above -- you can verify the CFN template that Amplify generates before the push.
I've attempted the proposed solution and am encountering a strange behavior that makes me think we may need to modify something in the S3 deployment bucket. Upon running the push:
Notice the RANGE key still reflects the old sort key field, not the updated one. Any idea why the deployment isn't using the updated configuration? EDIT: I've tried a few variations with the same result.
Okay, I was able to get a push through. In a nutshell, this enabled the state of Amplify to "sync" up with the state of CFN. The question remains as to what caused this de-sync, but at least I know how this can be resolved in the future. Leaving this ticket open to perform further investigation and to make sure I can make the necessary forward-progress updates.
The rest of the updates to the application succeeded. I've read that making schema changes while a local push is in progress can sometimes cause issues like this.
@matt-at-allera I'm glad you were able to complete the updates. Unfortunately, we've not been able to identify repro steps for this either. We have seen intermittent reports of this happening, but have been unable to repro using the affected schemas and various methods to interrupt the deployment (e.g., severing network connections, using expired credentials). In the cases where we do encounter a deployment error, an iterative rollback has brought things back in sync. I haven't tried doing a schema modification alongside a push, but in theory it might cause a de-sync like this.
@palpatim thanks for the help on this one. I'm going to go ahead and close out the issue. |
@palpatim Something similar just happened to me. I had a failed push: I had added three indexes to a table and made some other changes, but the push errored out because a Lambda did not have a needed security group value in the team provider file. Amplify did not roll back the table changes. Looking in CloudFormation, it showed the stack as having the added indexes and no drift. One thing to note is that this was an iterative deployment, and it did not fail until step 4/4.
Before opening, please confirm:
JavaScript Framework
React
Amplify APIs
GraphQL API
Amplify Version
v5
Amplify Categories
No response
Backend
Amplify CLI
Environment information
Describe the bug
A schema.graphql change was made where the `sortKeyFields` of two `@index`es were updated.

Before:

After:
I understand that this is not best practice, as an index with a new name should be created to support the new sort key field. However, during deployment, it appears that the underlying GSIs for the DynamoDB tables were updated to have the new sort key field. The update to the nested `api` stack failed, presumably because the AppSync resolvers did not like the change. However, upon rollback the updated indices were NOT successfully reverted, and the respective CloudFormation templates still have `supplierSiteId` as the range key.

If I do an `amplify pull` on the environment, the schema contains the original `@index` configurations.

I attempted to resolve this manually by deleting the indices and creating them with the old sort key fields. However, all that did was create drift in the CloudFormation stack.
I need a mechanism to synchronize what exists in the CloudFormation template with what Amplify has stored for these two indices.
Expected behavior
I would expect the initial push to fail, as Amplify should not support only changing the `sortKeyFields` on indices.

Reproduction steps

Add an `@model` to the schema.graphql file with an index that has a sortKeyField
// Put your code below this line.
Log output
aws-exports.js
No response
Manual configuration
No response
Additional configuration
No response
Mobile Device
No response
Mobile Operating System
No response
Mobile Browser
No response
Mobile Browser Version
No response
Additional information and screenshots
No response