
Development Process

Rob Moffat edited this page Oct 5, 2018 · 38 revisions

For Review

In the previous section we looked at a simple model for risks on any given activity.

Now, let's look at the everyday process of developing a new feature on a software project, and see how our risk model informs it.

An Example Process

Let's ignore for now the specifics of what methodology is being used - we'll come to that later. Let's say your team has settled on a process something like the following:

  1. Specification: A new feature is requested somehow, and a business analyst works to specify it.
  2. Code And Unit Test: A developer writes some code, and some unit tests.
  3. Integration: They integrate their code into the code base.
  4. UAT: They put the code into a User Acceptance Test (UAT) environment, and user(s) test it.

... All being well, the code is released to production.

Now, it might be waterfall, it might be agile, we're not going to commit to specifics at this stage. It's probably not perfect, but let's just assume that it works for this project and everyone is reasonably happy with it.

I'm not saying this is the right process, or even a good process: you could add code review, a pilot, integration testing, whatever. We're just doing some analysis of what the process gives us.

A Simple Development Process

What's happening here? Why these steps?

Minimizing Risks - Overview

I am going to argue that this entire process is informed by software risk:

  1. We have a business analyst who talks to users and fleshes out the details of the feature properly. This is to minimize the risk of building the wrong thing.
  2. We write unit tests to minimize the risk that our code isn't doing what we expected, or doesn't match the specification.
  3. We integrate our code to minimize the risk that it's inconsistent with the other, existing code on the project.
  4. We have acceptance testing and quality gates generally to minimize the risk of breaking production, somehow.

We could skip all those steps above and just do this:

  1. Developer gets wind of new idea from user, logs onto production and changes some code directly.

A Dangerous Development Process

We can all see this would be a disaster, but why?

Two reasons:

  1. You're meeting reality all-in-one-go: all of these risks materialize at the same time, and you have to deal with them all at once.
  2. Because of this, at the point you put code into the hands of your users, your Internal Model is at its least-developed. All the Hidden Risks now need to be dealt with at the same time, in production.

Applying the Model

Let's look at how our process should act to prevent these risks materializing by considering an unhappy path, one where at the outset, we have lots of Hidden Risks ready to materialize. Let's say a particularly vocal user rings up someone in the office and asks for new Feature X to be added to the software. It's logged as a new feature request, but:

  • Unfortunately, this feature once programmed will break an existing Feature Y.
  • Implementing the feature will use an API in a library which contains bugs that have to be coded around.
  • It's going to get misunderstood by the developer too, who is new on the project and doesn't understand how the software is used.
  • Actually, this functionality is mainly served by Feature Z...
  • which is already there but hard to find.

Development Process - Hidden Risks

This is a slightly contrived example, as you'll see. But let's follow our feature through the process and see how it meets reality slowly, and the hidden risks are discovered:

Specification

The first stage of the journey for the feature is that it meets the Business Analyst (BA). The purpose of the BA is to examine new goals for the project and try to integrate them with reality as they understand it. A good BA might take a feature request and vet it against the internal logic of the project, saying something like:

  • "This feature doesn't belong on the User screen, it belongs on the New Account screen"
  • "90% of this functionality is already present in the Document Merge Process"
  • "We need a control on the form that allows the user to select between Internal and External projects"

In the process of doing this, the BA is turning the simple feature request idea into a more consistent, well-explained specification or requirement which the developer can pick up. But why is this a useful step in our simple methodology? From the perspective of our Internal Model, we can say that the BA is responsible for:

BA Specification: exposing hidden risks as soon as possible

In surfacing these risks, there is another outcome: while Feature X might be flawed as originally presented, the BA can "evolve" it into a specification, and tie it down sufficiently to reduce the risks. The BA does all this by simply thinking about it, talking to people and writing stuff down.

This process of evolving the feature request into a requirement is the BA's job. From our Risk-First perspective, it is taking an idea and making it meet reality. Not the full reality of production (yet), but something more limited.

Code And Unit Test

The next stage for our feature, Feature X (Specification), is that it gets coded and some tests get written. Let's look at how the goal meets a new reality: this time it's the reality of a pre-existing codebase, which has its own internal logic.

As the developer begins coding the feature in the software, she will start with an Internal Model of the software, and how the code fits into it. But, in the process of implementing it, she is likely to learn about the codebase, and her Internal Model will develop.
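The kind of test the developer writes at this step can be sketched in a few lines. The function and names below are invented purely for illustration (the article doesn't specify the codebase): the point is that each test pins down an expectation from the specification, so a mismatch between the developer's Internal Model and reality surfaces here rather than later.

```python
import unittest

def merge_documents(docs):
    """Hypothetical Feature X helper: combine document texts in order."""
    return "\n".join(doc.strip() for doc in docs)

class MergeDocumentsTest(unittest.TestCase):
    def test_merge_preserves_order(self):
        # Encodes the developer's expectation from the specification.
        self.assertEqual(merge_documents(["a ", " b"]), "a\nb")

    def test_merge_of_empty_list_is_empty(self):
        # An edge case the specification didn't mention - writing the
        # test forces the question, exposing a hidden risk early.
        self.assertEqual(merge_documents([]), "")

if __name__ == "__main__":
    unittest.main()
```

Even a test this small does two jobs: it checks the code, and it forces questions ("what happens with an empty list?") that develop the Internal Model.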

Coding Process: exposing more hidden risks as you code

A couple of things about this diagram:

  • In boxes, we are showing Risks, which exist within Internal Models, whereas:
  • Beneath them, we are showing actual physical artifacts which exist in the real world.
  • Actions might meet reality, but they are changing reality too, by producing these artifacts.

Integration

Integration is where we run all the tests on the project, and compile all the code in a clean environment: the "reality" of the development environment can vary from one developer's machine to another.

So, this stage is about the developer's committed code meeting a new reality: the clean build.

At this stage, we might discover the Hidden Risk that we'd break Feature Y.
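A minimal sketch of how that discovery happens, with invented names (the article doesn't describe Feature Y's code): Feature Y has a pre-existing test over a shared helper, so if Feature X's developer changes that helper, the failure shows up in the integration build rather than in production.

```python
import unittest

# Shared helper that Feature Y already relies on. If Feature X's
# developer changes its output format, Feature Y silently breaks.
def format_account_name(first, last):
    return f"{last}, {first}"

# Feature Y: renders an account summary line.
def account_summary(first, last, balance):
    return f"{format_account_name(first, last)}: {balance:.2f}"

class FeatureYRegressionTest(unittest.TestCase):
    def test_summary_format_unchanged(self):
        # This pre-existing test fails in the clean integration build
        # if the helper changes - the hidden risk materializes here,
        # not in production.
        self.assertEqual(account_summary("Ada", "Lovelace", 10.5),
                         "Lovelace, Ada: 10.50")

if __name__ == "__main__":
    unittest.main()
```

The clean build matters because it runs *all* the tests, including ones the developer never looked at, against exactly what was committed.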

Integration testing exposes hidden risks before you get to production

UAT

UAT is where our feature meets another reality: actual users. I think you can see how the process works by now. We're just flushing out yet more Hidden Risks:

UAT - putting tame users in front of your software is better than putting it in front of real users, where the risk is higher

  • [Taking Action](Glossary#taking action) is the only way to create change in the world.
  • It's also the only way we can learn about the world, adding to our Internal Model. In this case, we discover the user's difficulty in finding the feature.

Observations

A couple of things:

First, the people setting up the development process didn't know about these exact risks, but they knew the shape that risks take. The process builds "nets" for the different kinds of hidden risks without knowing exactly what they are. Part of the purpose of this site is to help with this and try to provide a taxonomy for different types of risks.

Second, are these really risks, or are they problems we just didn't know about? I am using the terms interchangeably, to a certain extent. Even when you know you have a problem, it's still a risk to your deadline until it's solved. So, when does a risk become a problem? Is a problem still just a schedule-risk, or cost-risk? It's pretty hard to draw a line and say exactly.

Third, the real take-away from this is that all these risks exist because we don't know 100% how reality is. Risk exists because we don't (and can't) have a perfect view of the universe and how it'll develop. Reality is reality, the risks just exist in our head.

Fourth, hopefully you can see from the above that really all this work is risk management, and all work is testing ideas against reality.

Conclusion?

Could it be that everything you do on a software project is risk management? This is an idea explored in the next section.
