Large changeset tables cause website stability issues #334
Recursive publish uses ChangeSet internally.

silverstripe-versioned/src/RecursivePublishable.php Lines 76 to 88 in 165913a

I tried truncating the ChangeSet table to see what happens. The CMS "Publish" action started throwing errors in my face. After a couple of failed publishes, the ChangeSet table put itself back into a state where pages could be published again. This implies that at least the latest ChangeSet is required for doing a recursive publish, although I can't tell you why. Beyond that, it's not clear what the value of keeping those ChangeSet records in the database is. It looks like once upon a time, someone intended ChangeSet to be "revertable".

silverstripe-versioned/src/ChangeSet.php Lines 431 to 443 in 165913a
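Before truncating anything, it may help to see how much of the table is actually "background" data. A rough diagnostic, assuming silverstripe-versioned's default MySQL schema (table name `ChangeSet`, columns `State` and `IsInferred`):

```sql
-- Count changesets by state and whether they were created implicitly
-- by recursive publish (IsInferred = 1) vs. explicitly via Campaigns.
-- Column/table names are assumed from the module's default schema.
SELECT State, IsInferred, COUNT(*) AS Total
FROM ChangeSet
GROUP BY State, IsInferred;
```

On most sites the overwhelming majority of rows should show up as inferred and published, which is the population that looks safe to prune.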
Are the dev/build issues triggered when models are added/removed? I’ve had performance issues in the past when the enum values were updated for …
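For context on why enum changes hurt: when dev/build detects that an enum column's value list has changed, it issues an ALTER TABLE, and on MySQL that can force a full table copy. A hypothetical sketch of what gets executed (the exact enum values here are an assumption, not taken from the schema):

```sql
-- Hypothetical: what a dev/build enum change roughly amounts to.
-- Adding or reordering values rewrites the column definition; on a
-- ChangeSet table with tens of millions of rows this ALTER can lock
-- the table and take a very long time.
ALTER TABLE ChangeSet
  MODIFY State ENUM('open', 'published', 'reverted') DEFAULT 'open';
```

That would explain dev/build stalling or timing out on large sites even when the schema change itself is trivial.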
From an Operations perspective, see silverstripe/silverstripe-framework#9966. However, there are other issues related to large tables / datasets (e.g. snapshots, backups, database maintenance, etc.) which could also be improved through approaches such as garbage collection.
dev/build can run into issues on active sites where the implicit ChangeSet and ChangeSetItem tables can run into tens of millions of rows. Notably, this is all background data; very few customers use the "campaigns" feature directly (with ChangeSet.IsInferred = 0).

Option 1: Allow disabling of changeset writing (unclear if this is possible without rearchitecting, and whether we want to support that variation)
Option 2: Garbage collection of those records
Option 3: Fix dev/build failure cases (maybe around indexes?)
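For Option 2, a minimal garbage-collection sketch might look like the following, assuming silverstripe-versioned's default MySQL schema and MySQL's multi-table DELETE syntax. The 30-day retention window is an arbitrary assumption, and per the truncation experiment above, the latest changesets must be kept for recursive publish to work:

```sql
-- Hypothetical garbage collection (Option 2): remove implicit,
-- already-published changesets older than 30 days, deleting the
-- child ChangeSetItem rows first so no orphans are left behind.
-- Table/column names assumed from silverstripe-versioned's schema.
DELETE csi
FROM ChangeSetItem csi
INNER JOIN ChangeSet cs ON cs.ID = csi.ChangeSetID
WHERE cs.IsInferred = 1
  AND cs.State = 'published'
  AND cs.LastEdited < NOW() - INTERVAL 30 DAY;

DELETE FROM ChangeSet
WHERE IsInferred = 1
  AND State = 'published'
  AND LastEdited < NOW() - INTERVAL 30 DAY;
```

In practice this would need to run in batches (e.g. `LIMIT` plus a loop) to avoid long locks on tables this size, and explicit (IsInferred = 0) campaign changesets are deliberately left untouched.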