
DORA metrics for teams working on microservice-architecture-enabled systems #472

gintautassulskus-elsevier opened this issue Nov 27, 2023 · 0 comments

Hi, I am unsure where best to ask the question below, so I am posting it here.

Is it correct that the fourkeys implementation assumes a single-team-to-single-codebase setup?

How do you compute DORA metrics for teams working on microservice-architecture-enabled systems? These systems typically introduce a mesh of teams and codebases, and several factors can then skew the deployment frequency metric. First, a team's contributions may vary from one codebase to another, giving the impression that the team's throughput has decreased. Second, multiple teams contributing to the same codebase inflate each other's deployment frequency stats.
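To illustrate what per-team attribution would involve, here is a minimal Python sketch. It assumes deployment records that carry the authors of the changes they shipped and an author-to-team map; neither input exists in fourkeys out of the box, so both are hypothetical:

```python
from collections import defaultdict
from datetime import date

# Hypothetical inputs: deployment records carrying the authors of the
# changes they shipped, plus an author-to-team map maintained elsewhere.
author_team = {"alice": "team-a", "bob": "team-b"}

deployments = [
    {"deployed_on": date(2023, 11, 1), "change_authors": ["alice"]},
    {"deployed_on": date(2023, 11, 2), "change_authors": ["alice", "bob"]},
    {"deployed_on": date(2023, 11, 3), "change_authors": ["bob"]},
]

def per_team_deploy_counts(deployments, author_team):
    """Credit a deployment to a team only if that team authored at least
    one of the shipped changes, so teams sharing a codebase do not
    inflate each other's deployment frequency."""
    counts = defaultdict(int)
    for deployment in deployments:
        teams = {author_team.get(a, "unknown") for a in deployment["change_authors"]}
        for team in teams:
            counts[team] += 1
    return dict(counts)

print(per_team_deploy_counts(deployments, author_team))
# {'team-a': 2, 'team-b': 2}
```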

Does the notion of teams have to be introduced into the computation for accuracy? Is there a simpler alternative?

One suggested approach is to track "issue" completion frequency at the issue tracker level, with the definition of done being "in production". Issue trackers like Jira are well suited for this purpose because they organise items on team-level sprint boards. On the one hand, I am concerned that this tracks a fundamentally different metric: the frequency of requirements delivery, not of code delivery as per the DORA definition. On the other hand, such a metric would still correlate with engineering practice maturity and would surface deficiencies in it. What do you think?
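To make that alternative concrete, here is a minimal sketch assuming hypothetical issue records exported from a tracker, each with a timestamp for when the issue reached "in production" (field names are illustrative, not Jira's API):

```python
from collections import defaultdict
from datetime import date

# Hypothetical issue records; field names are illustrative, not Jira's API.
issues = [
    {"team": "team-a", "in_production_on": date(2023, 11, 6)},
    {"team": "team-a", "in_production_on": date(2023, 11, 8)},
    {"team": "team-b", "in_production_on": date(2023, 11, 7)},
    {"team": "team-a", "in_production_on": None},  # not yet "done"
]

def weekly_completion_counts(issues):
    """Count issues that met the definition of done ('in production'),
    bucketed per team per ISO week."""
    counts = defaultdict(int)
    for issue in issues:
        done = issue["in_production_on"]
        if done is None:
            continue
        year, week, _ = done.isocalendar()
        counts[(issue["team"], f"{year}-W{week:02d}")] += 1
    return dict(counts)

print(weekly_completion_counts(issues))
# {('team-a', '2023-W45'): 2, ('team-b', '2023-W45'): 1}
```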
