The XBDD deployment I have contact with receives builds from a few dozen projects, and its Mongo store is currently growing by multiple GB per day. It blew up yesterday when it ran the 100GB database partition out of space; at the current rate of growth it will exhaust its new 300GB partition some time this quarter. (Granted, broad uptake of the product is a nice problem to have.)
The majority of space is taken up by attached screenshots, rather than the results themselves.
Suggested change to keep the infrastructure requirement in check:
One of the main benefits we derive from XBDD is easy traceability from the requirement (Cucumber scenario) to a test result. To realize this benefit we really only need the test results for one (or a small number of) builds per product version - usually the final build or builds before release of that version. The series of beta builds leading up to the final one is useful short term, but does not need to be stored indefinitely.
What I suggest is:
- Implement functionality in XBDD for removing builds older than a threshold (1 week? 1 month?).
- Expose that functionality so it can be triggered externally (from a cron job, an HTTP call from a Bamboo job, or whatever works for an individual deployment once this is needed).
- Implement a flag to mark chosen builds as safe (either coupled to the existing 'pinned' functionality or something similar introduced for the purpose) - safe builds will never be removed, regardless of age.
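A minimal sketch of the retention rule the points above describe, assuming each build document carries a `date` timestamp and a `pinned` flag (field names are illustrative, not necessarily XBDD's actual schema; in a real deployment this filter would be expressed as a Mongo query rather than an in-memory scan):

```python
from datetime import datetime, timedelta

def builds_to_remove(builds, threshold_days, now=None):
    """Select builds older than the threshold, never touching pinned ones.

    `builds` is a list of dicts mirroring Mongo build documents. A build
    with no `date` field is treated as current and kept.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=threshold_days)
    return [
        b for b in builds
        if b.get("date", now) < cutoff and not b.get("pinned", False)
    ]
```

The equivalent server-side delete would be a single `delete_many` with a `$lt` date filter and a `pinned: {$ne: true}` guard, so the safe-build exemption is enforced in the same query that does the removal.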
A softer approach could leave the builds in place but drop their attachments. This would require some error handling in the UI to deal with the resulting broken image links. We also need to consider whether uploaded results and attachments should be treated differently from manually entered ones (I suggest not).
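The softer approach could look something like this: strip the heavy screenshot payloads out of a build document while keeping the pass/fail data, so traceability survives at a fraction of the storage cost. The `attachments` and `embeddings` field names are assumptions for illustration only:

```python
def strip_attachments(build):
    """Remove attachment payloads from a build document in place,
    keeping the result data itself. The UI must then tolerate the
    missing images (e.g. render a placeholder for broken links)."""
    build.pop("attachments", None)
    for scenario in build.get("scenarios", []):
        scenario.pop("embeddings", None)
    return build
```

Applying this uniformly to uploaded and manually entered attachments keeps the rule simple, in line with the suggestion above.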
Making the time threshold for cleanup a parameter on the triggering call would be sensible, since that avoids hard-coding an arbitrary time period and lets each deployment fine-tune cleanup to its own constraints.
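Concretely, the threshold could travel as a query parameter on the cleanup call, e.g. `DELETE /rest/admin/cleanup?keepDays=30` (a hypothetical endpoint and parameter name, not XBDD's current API). A sketch of parsing and validating it, with a default so the endpoint stays usable from a bare cron-driven curl call:

```python
from urllib.parse import urlparse, parse_qs

DEFAULT_KEEP_DAYS = 30  # fallback if the caller does not fine-tune it

def keep_days_from_url(url):
    """Extract and validate the `keepDays` parameter from a
    cleanup-trigger URL, falling back to the default when absent."""
    params = parse_qs(urlparse(url).query)
    try:
        days = int(params.get("keepDays", [DEFAULT_KEEP_DAYS])[0])
    except ValueError:
        raise ValueError("keepDays must be an integer number of days")
    if days < 1:
        raise ValueError("keepDays must be at least 1")
    return days
```

Rejecting non-positive values guards against a mistyped trigger wiping out every build, including the most recent ones.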