Hi, I am unsure where best to ask the question below, so I am posting it here.
Is it correct that the fourkeys implementation assumes a single-team-to-single-codebase setup?
How do you compute DORA metrics for teams working on microservice architectures? These systems typically introduce a many-to-many mapping between teams and codebases, and several factors can then skew the deployment frequency metric. First, a team's contributions may vary from one codebase to another, giving the impression that the team's throughput has decreased. Second, multiple teams contributing to the same codebase would inflate each other's deployment frequency numbers.
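To make the skew concrete, here is a minimal Python sketch (not part of fourkeys; the `deployments` records and field layout are hypothetical) contrasting deployment frequency grouped by repository with deployment frequency attributed to teams:

```python
from collections import Counter
from datetime import date

# Hypothetical deployment events: (repo, deploying_team, deploy_date).
# In fourkeys these would come from the deployments data; the shape here is illustrative only.
deployments = [
    ("payments-service", "team-a", date(2023, 5, 1)),
    ("payments-service", "team-b", date(2023, 5, 2)),
    ("payments-service", "team-a", date(2023, 5, 3)),
    ("search-service",   "team-a", date(2023, 5, 3)),
]

# Per-repository counts: team-a and team-b inflate each other's numbers for
# payments-service, while team-a's work is split across two repos.
per_repo = Counter(repo for repo, _, _ in deployments)

# Per-team counts: only possible if each deployment can be attributed to a team.
per_team = Counter(team for _, team, _ in deployments)

print("deployments per repo:", dict(per_repo))
print("deployments per team:", dict(per_team))
```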
Does the notion of teams have to be introduced into the computation for accuracy? Is there a simpler alternative to that?
One suggested approach is to track "issue" completion frequency at the issue tracker level, where the definition of done is "in production". Issue trackers like Jira are well suited for this purpose, as they organise items on team-level sprint boards. On the one hand, I am concerned that this tracks a fundamentally different metric: the frequency of requirements delivery rather than of code deployment, as per the DORA definition. On the other hand, such a metric would still correlate with engineering practice maturity and would expose deficiencies in it. What do you think?
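For comparison, a rough sketch of the issue-tracker alternative, assuming a hypothetical export of completed issues with a team name and the date each reached an "in production" status (the record shape is illustrative, not a real Jira API):

```python
from collections import Counter
from datetime import date

# Hypothetical issue records: (team, date the issue reached "in production").
completed_issues = [
    ("team-a", date(2023, 5, 1)),
    ("team-a", date(2023, 5, 4)),
    ("team-b", date(2023, 5, 2)),
]

# Weekly completion frequency per team; grouping by ISO week keeps teams
# comparable regardless of which codebases their issues touched.
weekly = Counter((team, d.isocalendar()[:2]) for team, d in completed_issues)

for (team, (year, week)), count in sorted(weekly.items()):
    print(f"{team}: {count} issue(s) completed in {year}-W{week:02d}")
```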