diff --git a/docs/bets/Coding-Bets.md b/docs/bets/Coding-Bets.md
index 5836654a5..015e2607c 100644
--- a/docs/bets/Coding-Bets.md
+++ b/docs/bets/Coding-Bets.md
@@ -25,11 +25,11 @@ So let's look at some examples...
##### "Making our codebase easier to reason about is worth the outlay of time."
-[Complexity Risk](../risks/Complexity-Risk.md) is the risk of your project failing due to the weight of complexity in the codebase, and its resistance to change and comprehension. Fred Brooks' calls this mode of failure _the tar pit_:
+[Complexity Risk](/tags/Complexity-Risk) is the risk of your project failing due to the weight of complexity in the codebase, and its resistance to change and comprehension. Fred Brooks calls this mode of failure _the tar pit_:
> "Large and small, massive or wiry, team after team has become entangled in the tar. No one thing seems to cause the difficulty - any particular paw can be pulled away. But the accumulation of simultaneous and interacting factors brings slower and slower motion." - [Frederick P. Brooks, _The Mythical Man-Month_](https://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959)
-Refactoring is the notion that we can escape the tar pit by making our codebase simpler: If _project agility_ is some function of [Complexity Risk](../risks/Complexity-Risk.md) and your team's talent, the bet here is that you can trade some time _now_ on to move to a place of lower [Complexity Risk](../risks/Complexity-Risk.md), making it easier for the developers to _get stuff done_ in the future.
+Refactoring is the notion that we can escape the tar pit by making our codebase simpler: if _project agility_ is some function of [Complexity Risk](/tags/Complexity-Risk) and your team's talent, the bet here is that you can trade some time _now_ to move to a place of lower [Complexity Risk](/tags/Complexity-Risk), making it easier for the developers to _get stuff done_ in the future.
Refactoring requires that you have some _simplifying realisation_:
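+For instance, here is a minimal, invented sketch of what a _simplifying realisation_ might look like in code: noticing that a family of near-duplicate functions varies only by a label lets you collapse them into one, shrinking the surface area a maintainer has to reason about.
+
+```python
+# Before: near-duplicate handlers - every new report type means more code.
+def daily_report(trades):
+    return f"DAILY: {sum(t['value'] for t in trades)}"
+
+def weekly_report(trades):
+    return f"WEEKLY: {sum(t['value'] for t in trades)}"
+
+# After: the simplifying realisation is that only the label varies, so one
+# parameterised function replaces the whole family.
+def report(period, trades):
+    return f"{period.upper()}: {sum(t['value'] for t in trades)}"
+
+print(report("daily", [{"value": 10}, {"value": 5}]))  # DAILY: 15
+```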
@@ -45,7 +45,7 @@ It looks like this:
**When you win** the codebase becomes easier to think about, and you delay the tar-pit.
-**When you lose** the [Complexity Risk](../risks/Complexity-Risk.md) improvement is less than you hoped, it takes longer than expected, or the _simplifying realisation_ doesn't pan out and you've lost a week.
+**When you lose** the [Complexity Risk](/tags/Complexity-Risk) improvement is less than you hoped, it takes longer than expected, or the _simplifying realisation_ doesn't pan out and you've lost a week.
## Spike Solutions: A New Technology Bet
@@ -57,7 +57,7 @@ You might want to use a Spike Solution to test out replacing a badly-fitting tec
> "Let's explore using [ElasticSearch](https://en.wikipedia.org/wiki/Elasticsearch) for searching instead of SQL Statements."
-Alternatively, someone will suggest using an existing technology to eradicate lots of home-grown code. Devoting parts of your code-base to solving problems that are already solved elsewhere is a source of [Complexity Risk](../risks/Complexity-Risk.md), because that code needs maintaining.
+Alternatively, someone will suggest using an existing technology to eradicate lots of home-grown code. Devoting parts of your code-base to solving problems that are already solved elsewhere is a source of [Complexity Risk](/tags/Complexity-Risk), because that code needs maintaining.
> "Let's throw away all these scripts and start using [Kubernetes](https://en.wikipedia.org/wiki/Kubernetes) to manage our components."
@@ -87,13 +87,13 @@ Often you get user-stories like these:
> "We need a global search because people spend too much time menu-diving."
-New features might help sell your software to new markets and please existing power users. But too many features confuse users, obscuring the essential purpose of the software. This is [Conceptual Integrity Risk](../risks/Feature-Risk.md#conceptual-integrity-risk) - trying to please everyone means you please no-one.
+New features might help sell your software to new markets and please existing power users. But too many features confuse users, obscuring the essential purpose of the software. This is [Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk) - trying to please everyone means you please no-one.
![Stake and Reward for Adding New Features](/img/generated/practices/coding/new-feature.png)
**When you win** existing users welcome the change with open arms and maybe new markets open up.
-**When you lose** the feature is just a diversion from the main purpose of the project, or it makes little impact. It might be used often enough to remain, but adds [Complexity Risk](../risks/Complexity-Risk.md) to the codebase. A worse scenario is that excessive features confuse the user-base and lead to dissatisfaction.
+**When you lose** the feature is just a diversion from the main purpose of the project, or it makes little impact. It might be used often enough to remain, but adds [Complexity Risk](/tags/Complexity-Risk) to the codebase. A worse scenario is that excessive features confuse the user-base and lead to dissatisfaction.
**Reduce the stakes by:**
- Thoroughly triaging new features.
@@ -132,12 +132,12 @@ The idea here is to make a bet that a market exists for a certain product, _and
We're used to the idea of entrepreneurs taking risks on new business ideas (like in the MVP example, above). But it's not really so different when you are developing in a team, or on a personal project. So if you start by taking the view that every piece of work you do is a bet, then it really helps to put into perspective what is at stake and what there is to gain.
-The best gamblers (the ones who win over time) don't necessarily take bets they'll always win. But they are always judging risk, stake and reward. They try to place bets where the [Balance of Risk](../thinking/Glossary.md#balance-of-risk) is in their favour. As developers, we should adopt the same mind-set:
+The best gamblers (the ones who win over time) don't necessarily take bets they'll always win. But they are always judging risk, stake and reward. They try to place bets where the [Balance of Risk](/thinking/Glossary.md#balance-of-risk) is in their favour. As developers, we should adopt the same mind-set:
- What are the likely stakes?
- - What is the [Payoff](../thinking/Glossary.md#payoff)?
+ - What is the [Payoff](/thinking/Glossary.md#payoff)?
- What are the odds?
- - Is the bet worth it? Do the stakes justify the [Payoff](../thinking/Glossary.md#payoff)?
+ - Is the bet worth it? Do the stakes justify the [Payoff](/thinking/Glossary.md#payoff)?
- How can you minimise the stakes while maximising pay-off? How long will it take for the pay-off to be worthwhile? (See the sketch below.)
- Are you making a long bet, or lots of small, short bets? You can reduce the overall stakes by splitting work up and doing the riskiest part first.
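+To make those questions concrete, here is a minimal sketch with invented figures, weighing a week-long refactoring bet against a long-odds rewrite:
+
+```python
+# Weigh a coding bet: stake = time put at risk, pay-off = time saved if it works.
+def expected_value(stake_days, payoff_days, odds_of_success):
+    # Gain on a win, weighted by the odds, minus the stake lost otherwise.
+    return odds_of_success * payoff_days - (1 - odds_of_success) * stake_days
+
+# A week-long refactoring with a 60% chance of saving a month of future effort:
+print(expected_value(stake_days=5, payoff_days=20, odds_of_success=0.6))   # 10.0
+
+# A month-long rewrite with the same pay-off but only a 20% chance of success:
+print(expected_value(stake_days=20, payoff_days=20, odds_of_success=0.2))  # -12.0
+```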
diff --git a/docs/bets/Debugging-Bets.md b/docs/bets/Debugging-Bets.md
index dc30397d6..90c127f70 100644
--- a/docs/bets/Debugging-Bets.md
+++ b/docs/bets/Debugging-Bets.md
@@ -22,7 +22,7 @@ Then, in [Coding Bets](Coding-Bets.md) we considered the same thing at task leve
Now, we’re going to consider the exact same thing again but from the point of view of debugging. I’ve been waiting a while to write this, because I’ve wanted a really interesting bug to come along to allow me to go over how you can apply risk to cracking it.
-Luckily one came along today, giving me a chance to write it up and go over this. If you've not looked at Risk-First articles before, you may want to review [Risk-First Diagrams Explained](../thinking/Risk-First-Diagrams.md), since there'll be lots of diagrams to demonstrate the bets I'm making.
+Luckily one came along today, giving me a chance to write it up and go over this. If you've not looked at Risk-First articles before, you may want to review [Risk-First Diagrams Explained](/thinking/Risk-First-Diagrams.md), since there'll be lots of diagrams to demonstrate the bets I'm making.
## The Problem
@@ -126,7 +126,7 @@ Sadly, this meant that I’d actually had to test and rule out _all of the other
## Some Notes
-1. I started by writing down all the things I knew, and all of my hypotheses. Why? Surely, time was short! I did this _because_ time was short. The reason was, by having all of the facts and hypotheses to hand I was setting up my [Internal Model](../thinking/Glossary.md#internal-model) of the problem, with which I could reason about the new information as I came across it.
+1. I started by writing down all the things I knew, and all of my hypotheses. Why? Surely, time was short! I did this _because_ time was short. The reason was that, by having all of the facts and hypotheses to hand, I was setting up my [Internal Model](/thinking/Glossary.md#internal-model) of the problem, with which I could reason about the new information as I came across it.
2. I performed four tests, and ended up ruling out six different hypotheses. That feels like good value-for-time.
3. In each case, I am trading _time_ to change the risk profile of the problem. By reducing to zero the likelihood of some risks, I am increasing the likelihood of those left. So a good test would:
- a. Bisect probability space 50/50. That way the information is maximised.
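+To see why a 50/50 split maximises information, you can compute the expected bits a test yields - a sketch, with the probabilities invented:
+
+```python
+import math
+
+def bits_gained(p_yes):
+    """Expected information from a test whose outcome is 'yes' with probability p_yes."""
+    return -sum(p * math.log2(p) for p in (p_yes, 1 - p_yes) if p > 0)
+
+print(bits_gained(0.5))  # 1.0 bit - an even split tells you the most
+print(bits_gained(0.9))  # ~0.47 bits - a lopsided test tells you far less
+```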
diff --git a/docs/bets/Purpose-Development-Team.md b/docs/bets/Purpose-Development-Team.md
index b02d53775..d92eb7e4a 100644
--- a/docs/bets/Purpose-Development-Team.md
+++ b/docs/bets/Purpose-Development-Team.md
@@ -40,7 +40,7 @@ Scrum's rule about working-to-a-sprint is well-meaning but not always applicable
## Case 3: Technical Debt
-Sometimes, I am faced with a conflict over whether to pay off [technical debt](../risks/Complexity-Risk.md#technical-debt) or build new functionality. Sometimes the conflict will be with people in my team, or with stake-holders but sometimes it is an internal, personal conflict.
+Sometimes, I am faced with a conflict over whether to pay off [technical debt](/risks/Complexity-Risk.md#technical-debt) or build new functionality. Sometimes the conflict will be with people in my team or with stake-holders, but sometimes it is an internal, personal conflict.
![Technical Debt vs Building Features](/img/generated/practices/purpose/technical-debt.png)
@@ -68,9 +68,9 @@ So, above I’ve given several cases of contradictory tensions within developmen
But could there be a “general theory” somehow that avoids these contradictions? What would it look like? I am going to suggest one here:
-> "The purpose of the development team is to improve the [balance of risk](../thinking/Glossary.md#balance-of-risk) for achieving business goals as much as possible."
+> "The purpose of the development team is to improve the [balance of risk](/thinking/Glossary.md#balance-of-risk) for achieving business goals as much as possible."
-Now clearly, the troublesome clause in this statement is “[balance of risk](../thinking/Glossary.md#balance-of-risk)”. So, before we apply this to the cases above, let’s explain this concept in some detail by exploring three toy examples: the roulette table, buying stocks and cycling to work. Then we'll see how this impacts the work we do in software development more generally.
+Now clearly, the troublesome clause in this statement is “[balance of risk](/thinking/Glossary.md#balance-of-risk)”. So, before we apply this to the cases above, let’s explain this concept in some detail by exploring three toy examples: the roulette table, buying stocks and cycling to work. Then we'll see how this impacts the work we do in software development more generally.
## Example 1: The Roulette Table
@@ -81,7 +81,7 @@ Let’s talk about “risk” for a bit. First, we’re going to consider the g
The above chart shows the distribution of returns for this bet. Which hole the ball lands in (entirely randomly) is the independent variable on the x-axis. The return is on the y-axis. Most of the time, it’s a small loss, but there’s that one big win on the 12. (For clarity, in all the charts, I’ve arranged the x-axis in order of “worst outcome” to “best outcome”, but it doesn’t necessarily have to be arranged like this.)
-In roulette, then, the [balance of risk](../thinking/Glossary.md#balance-of-risk) is against us: if we integrate to find the area under this chart, it comes to -1 chips. You could get lucky, but over time the house wins. It’s (fairly) transparent that this is the case when you enter the game, so people are clearly not playing roulette with the rational goal of maximising chips.
+In roulette, then, the [balance of risk](/thinking/Glossary.md#balance-of-risk) is against us: if we integrate to find the area under this chart, it comes to -1 chips. You could get lucky, but over time the house wins. It’s (fairly) transparent that this is the case when you enter the game, so people are clearly not playing roulette with the rational goal of maximising chips.
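+The arithmetic behind that -1 is easy to check, assuming a single-zero wheel and a one-chip straight-up bet paying 35-to-1:
+
+```python
+# One chip on the 12, on a single-zero wheel: 37 equally-likely holes.
+returns = {hole: (35 if hole == 12 else -1) for hole in range(37)}
+
+print(sum(returns.values()))       # -1: the 'area under the chart'
+print(sum(returns.values()) / 37)  # ~-0.027 chips expected per spin
+```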
## Example 2: Buying Stocks
@@ -93,11 +93,11 @@ First, a roulette table presents us with a set of very discrete outcomes. Real
The chart above (from [William T Ziemba](https://www.williamtziemba.com)) shows the returns-per-quarter of Ford and Berkshire Hathaway stocks over a number of years, with worst-performing quarters on the left and best-performing on the right.
-Second, while you know ahead-of-time the chances of winning at roulette, you can only guess at the [balance of risk](../thinking/Glossary.md#balance-of-risk) for owning Berkshire Hathaway stock for the next quarter, even if you are armed with the above chart. Generally, owning shares has a net-positive [balance of risk](../thinking/Glossary.md#balance-of-risk): on average you're more likely to make money than lose money, but it's not guaranteed - past performance is no indication of future performance.
+Second, while you know ahead-of-time the chances of winning at roulette, you can only guess at the [balance of risk](/thinking/Glossary.md#balance-of-risk) for owning Berkshire Hathaway stock for the next quarter, even if you are armed with the above chart. Generally, owning shares has a net-positive [balance of risk](/thinking/Glossary.md#balance-of-risk): on average you're more likely to make money than lose money, but it's not guaranteed - past performance is no indication of future performance.
Another question relating to this graph might be: which firm is generating the most value? Certainly, the area under the Berkshire Hathaway curve is larger but there is a bigger downside too. Is it possible that Berkshire Hathaway generates more value while taking on more risk?
-When we consider buying a stock, we are going to build a model of the [balance of risks](../thinking/Glossary.md#balance-of-risk) (perhaps on a spreadsheet, or in our heads). This will be dependent on our own preferences and experience (our [Internal Model](../thinking/Glossary.md#internal-model) if you will).
+When we consider buying a stock, we are going to build a model of the [balance of risks](/thinking/Glossary.md#balance-of-risk) (perhaps on a spreadsheet, or in our heads). This will be dependent on our own preferences and experience (our [Internal Model](/thinking/Glossary.md#internal-model) if you will).
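+Here is a spreadsheet-style sketch of such a model, using invented quarterly returns rather than the chart's real data:
+
+```python
+quarterly_returns = [-0.12, -0.03, 0.01, 0.04, 0.06, 0.09, 0.15, 0.22]  # invented
+
+mean = sum(quarterly_returns) / len(quarterly_returns)
+worst = min(quarterly_returns)
+p_loss = sum(1 for r in quarterly_returns if r < 0) / len(quarterly_returns)
+
+print(f"average quarter: {mean:+.1%}")   # the area under the curve
+print(f"worst quarter:   {worst:+.1%}")  # how deep the downside goes
+print(f"losing quarters: {p_loss:.0%}")  # how often the downside bites
+```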
## Example 3: Cycling To Work
@@ -105,7 +105,7 @@ Gambling is all about winning _chips_, and buying stock is all about winning _mo
![Cycling To Work: Distributions of Returns - Time and Health](/img/numbers/cycling-to-work.png)
-In the above chart, we have two risk profiles for cycling to work. On the left, we have the time taken. After a few week's cycling, we can probably start to build up a good [Internal Model](../thinking/Glossary.md#internal-model) of what this distribution looks like.
+In the above chart, we have two risk profiles for cycling to work. On the left, we have the time taken. After a few weeks' cycling, we can probably start to build up a good [Internal Model](/thinking/Glossary.md#internal-model) of what this distribution looks like.
On the right, we have _health_. There _isn't_ a good objective measure for this. We might look at our weight, or resting heart-rate or something, or just generally have a good feeling that cycling is making us fitter. Also, there's probably a worry about having an accident built into this (the steep drop on the left), and again, there is no objective measure for judging how badly that might come off.
@@ -117,32 +117,32 @@ So we have three issues with health:
## Back To Software
-So, we've gone from the Roulette Table example where the whole risk profile is completely known in advance to the Cycling example, where the risk profile is hidden from us, and unknowable. Regardless, we will have our own [Internal Model](../thinking/Glossary.md#internal-model) of the balance of risks which we use to make judgement calls.
+So, we've gone from the Roulette Table example, where the whole risk profile is completely known in advance, to the Cycling example, where the risk profile is hidden from us and unknowable. Regardless, we will have our own [Internal Model](/thinking/Glossary.md#internal-model) of the balance of risks which we use to make judgement calls.
-Just as a decision over how fast to cycle to work changes the [balance of risk](../thinking/Glossary.md#balance-of-risk), the actions and decisions we make in software development do too.
+Just as a decision over how fast to cycle to work changes the [balance of risk](/thinking/Glossary.md#balance-of-risk), the actions and decisions we make in software development do too.
-The difference is, while the cycling example was chosen to be quite _finely balanced_, in software development we should be looking for actions to take which improve the upside _considerably_ more than they worsen the downside. That is, improving the [balance of risk](../thinking/Glossary.md#balance-of-risk) _as much as possible_.
+The difference is, while the cycling example was chosen to be quite _finely balanced_, in software development we should be looking for actions to take which improve the upside _considerably_ more than they worsen the downside. That is, improving the [balance of risk](/thinking/Glossary.md#balance-of-risk) _as much as possible_.
![Good and Not-So-Good Actions](/img/numbers/good-not-so-good-actions.png)
-This is shown in the above chart. Let's say you have two possible pieces of development, both with a similar downside (maybe they take a similar time to complete and this what is lost if it doesn't work out). However, the action on the left _significantly_ improves the [balance of risk](../thinking/Glossary.md#balance-of-risk) for the project. Therefore, all else being equal, we should take that bet.
+This is shown in the above chart. Let's say you have two possible pieces of development, both with a similar downside (maybe they take a similar time to complete and this is what is lost if it doesn't work out). However, the action on the left _significantly_ improves the [balance of risk](/thinking/Glossary.md#balance-of-risk) for the project. Therefore, all else being equal, we should take that bet.
We don't want to just do work that merely shifts us from having one big risk to another, we want to do work that swaps out a large risk for maybe a couple of tiny ones.
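+Here is the comparison in that chart redone numerically, with invented discrete outcome distributions for the two candidate pieces of work:
+
+```python
+# Each action: (probability, change to the project's position) pairs.
+action_left  = [(0.3, -5), (0.5, 8), (0.2, 20)]  # similar downside, big upside
+action_right = [(0.3, -5), (0.6, 2), (0.1, 6)]   # similar downside, small upside
+
+def balance_of_risk(outcomes):
+    # Probability-weighted sum: the area under the action's chart.
+    return sum(p * v for p, v in outcomes)
+
+print(balance_of_risk(action_left))   # 6.5 - take this bet
+print(balance_of_risk(action_right))  # ~0.3 - a much weaker bet
+```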
Let's go back to our original cases:
- If I decide to **suspend the current sprint** to fix an outage, then that’s because I’ve decided that the risk of lost business, or the damage to reputation is much greater than the risk of customers walking because we didn’t complete the planned features.
-- When the Agile Manifesto stresses **Individuals and Interactions over Processes and Tools**, it’s because it believes focusing on processes and tools leads to much greater risk. This is based on the experience that while focusing on individuals and interactions may appear to be a less efficient way to build software, following strict formal processes massively increases the much worse risk of [building the wrong product](../risks/Feature-Risk.md#feature-fit-risk).
-- When we argue for **fixing technical debt against shipping a new feature**, what we are really doing is expressing differences in our models of the [balance of risk](../thinking/Glossary.md#balance-of-risk) from taking these actions. My boss and I might both be trying to minimise the risk of customers defecting to another product but he might believe this is best achieved by [adding new features](/tags/Feature-Risk) in the short term, whilst I might believe that [clearing technical debt](../risks/Complexity-Risk.md#technical-debt) allows us to get features delivered faster in the long term.
-- In the example of **Sustainably vs Quickly**, it's clear that what we should be doing is trying to avoid altering the balance of risks in a way that sacrifices too much Sustainability or Speed. To do this requires judgement in the form of an accurate [Internal Model](../thinking/Glossary.md#internal-model) of the [balance of risks](../thinking/Glossary.md#balance-of-risk).
+- When the Agile Manifesto stresses **Individuals and Interactions over Processes and Tools**, it’s because it believes focusing on processes and tools leads to much greater risk. This is based on the experience that while focusing on individuals and interactions may appear to be a less efficient way to build software, following strict formal processes massively increases the much worse risk of [building the wrong product](/tags/Feature-Fit-Risk).
+- When we argue for **fixing technical debt against shipping a new feature**, what we are really doing is expressing differences in our models of the [balance of risk](/thinking/Glossary.md#balance-of-risk) from taking these actions. My boss and I might both be trying to minimise the risk of customers defecting to another product but he might believe this is best achieved by [adding new features](/tags/Feature-Risk) in the short term, whilst I might believe that [clearing technical debt](/risks/Complexity-Risk.md#technical-debt) allows us to get features delivered faster in the long term.
+- In the example of **Sustainably vs Quickly**, it's clear that what we should be doing is trying to avoid altering the balance of risks in a way that sacrifices too much Sustainability or Speed. To do this requires judgement in the form of an accurate [Internal Model](/thinking/Glossary.md#internal-model) of the [balance of risks](/thinking/Glossary.md#balance-of-risk).
### Other Scenarios
-In a way, this is not just about development teams. Any time a person is added to an organisation, the hope is that it will improve the [balance of risk](../thinking/Glossary.md#balance-of-risk) for that organisation. The development team are experts in improving the balance of [technical risks](../risks/Risk-Landscape.md) but other teams have other specialities:
+In a way, this is not just about development teams. Any time a person is added to an organisation, the hope is that it will improve the [balance of risk](/thinking/Glossary.md#balance-of-risk) for that organisation. The development team are experts in improving the balance of [technical risks](/risks/Risk-Landscape.md) but other teams have other specialities:
- - The Finance team are there to avoid the risk of [running out of money](../risks/Scarcity-Risk.md#funding-risk) and ensuring that the bills get paid (avoiding [Legal Risks](../risks/Operational-Risk.md)).
- - The Human Resources team are there to make sure staff are hired, managed and leave properly. Doing this avoids [inefficiency](../risks/Scarcity-Risk.md#schedule-risk), [Reputation Damage](../risks/Communication-Risk.md#trust--belief-risk), [Morale Issues](../risks/Agency-Risk.md#morale-failure) and [Legal Risks](../risks/Operational-Risk.md).
- - The best doctors have accurate [Internal Models](../thinking/Glossary.md#internal-model). They can best diagnose the illnesses and figure out treatments that improve the patient's [balance of risk](../thinking/Glossary.md#balance-of-risk). Medical Students are all taught to 'first, do no harm':
+ - The Finance team are there to avoid the risk of [running out of money](/tags/Funding-Risk) and to ensure that the bills get paid (avoiding [Legal Risks](/tags/Operational-Risk)).
+ - The Human Resources team are there to make sure staff are hired, managed and offboarded properly. Doing this avoids [inefficiency](/tags/Schedule-Risk), [Reputation Damage](/tags/Trust-And-Belief-Risk), [Morale Issues](/risks/Agency-Risk.md#morale-failure) and [Legal Risks](/tags/Operational-Risk).
+ - The best doctors have accurate [Internal Models](/thinking/Glossary.md#internal-model). They can best diagnose the illnesses and figure out treatments that improve the patient's [balance of risk](/thinking/Glossary.md#balance-of-risk). Medical Students are all taught to 'first, do no harm':
> "given an existing problem, it may be better not to do something, or even to do nothing, than to risk causing more harm than good." - [Primum non nocere, _Wikipedia_](https://en.wikipedia.org/wiki/Primum_non_nocere).
@@ -157,7 +157,7 @@ If we were just delivering value, we might not:
- **Build Unit Tests**. After all, these add nothing directly to the customer experience.
- **Keep Backups**. Backups minimise the downside of storage failure.
- **Add log statements**. When things go wrong, these help you to work out why.
-- **Worry about [ACID](https://en.wikipedia.org/wiki/ACID_(computer_science)) transactions.** They slow things down, but they increase [reliability](../risks/Dependency-Risk.md#reliability-risk).
+- **Worry about [ACID](https://en.wikipedia.org/wiki/ACID_(computer_science)) transactions.** They slow things down, but they increase [reliability](/tags/Reliability-Risk).
- **Work to minimise dependencies**. Each dependency carries a risk that it might fail, causing problems in your software.
All of these actions are about _insurance_, which is about limiting downside-risk. None of them are of value _per se_ to the client.
@@ -166,4 +166,4 @@ All of these actions are about _insurance_, which is about limiting downside-ris
If you are faced with a choice between extremes...
-This is just a few simple examples and actually it goes much further than this. In [Estimates](../estimating/Estimates.md) I apply this idea to software estimating, and the next article, [Coding Bets](Coding-Bets.md), I am going to show how knowledge of the [balance of risk](../thinking/Glossary.md#balance-of-risk) concept can inform the way we go about our day-to-day work as developers...
+These are just a few simple examples and actually it goes much further than this. In [Estimates](../estimating/Estimates.md) I apply this idea to software estimating, and in the next article, [Coding Bets](Coding-Bets.md), I am going to show how the [balance of risk](/thinking/Glossary.md#balance-of-risk) concept can inform the way we go about our day-to-day work as developers...
diff --git a/docs/bets/Start.md b/docs/bets/Start.md
index c26904237..379502c25 100644
--- a/docs/bets/Start.md
+++ b/docs/bets/Start.md
@@ -11,7 +11,7 @@ cat: Bets
tags:
- Front
tweet: yes
-sidebar_position: 4
+sidebar_position: 7
---
# On Bets
diff --git a/docs/bets/_category_.yaml b/docs/bets/_category_.yaml
index cc9a0c7a4..8337c71e3 100644
--- a/docs/bets/_category_.yaml
+++ b/docs/bets/_category_.yaml
@@ -1,4 +1,4 @@
-position: 5
+position: 7
label: 'Bets'
link:
type: doc
diff --git a/docs/books/Risk-First-Second-Edition.md b/docs/books/Risk-First-Second-Edition.md
index 211776f8b..73c3ac214 100644
--- a/docs/books/Risk-First-Second-Edition.md
+++ b/docs/books/Risk-First-Second-Edition.md
@@ -6,7 +6,6 @@ featured:
class: bg1
element: ''
tags:
- - Front
- Books
sidebar_position: 2
---
diff --git a/docs/books/_category_.yaml b/docs/books/_category_.yaml
index c1b3ef048..2b4e469dc 100644
--- a/docs/books/_category_.yaml
+++ b/docs/books/_category_.yaml
@@ -1,4 +1,4 @@
-position: 7
+position: 6
label: 'Books'
link:
type: doc
diff --git a/docs/estimating/Analogies.md b/docs/estimating/Analogies.md
index 206c038f8..8ac3c0110 100644
--- a/docs/estimating/Analogies.md
+++ b/docs/estimating/Analogies.md
@@ -19,11 +19,11 @@ So far, this track of articles has tried to bring the problems of estimating sof
- [Fill-The-Bucket](Fill-The-Bucket.md): This is the easiest domain to work in. All tasks are similar and uncorrelated. We can _extrapolate_ to figure out how much time the next _n_ units will take to do.
- [Kitchen Cabinet](Kitchen-Cabinet.md): In this domain, there is _hidden work_. We don't know how much there might be. If we can break down tasks into smaller units, then by the _law of averages_ and the _central limit theorem_, we can apply some statistics to figure out when we might finish.
- [Journeys](Journeys.md): In this domain, work is heterogeneous and interconnected. Different parts depend on each other, and a failure in one part might mean going back to the drawing board entirely. The way to estimate in this domain is to _know the landscape_ and to build in _buffers_.
-- [Fractals](Fractals.md): In this domain, [Parkinson's Law](../risks/Process-Risk.md#bureaucracy) is king. There is always more work to be done. The best thing we can do is try and apply ourselves to the _highest value_ work at any given point, and frequently refer back to reality to find out if we're building the right thing.
+- [Fractals](Fractals.md): In this domain, [Parkinson's Law](/risks/Process-Risk.md#bureaucracy) is king. There is always more work to be done. The best thing we can do is try and apply ourselves to the _highest value_ work at any given point, and frequently refer back to reality to find out if we're building the right thing.
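+The Kitchen Cabinet point can be made concrete with a few lines of simulation (task figures invented): individually unpredictable sub-tasks sum to a total you can quote a percentile on.
+
+```python
+import random
+
+def finish_day_80th_percentile(n_tasks=20, trials=10_000):
+    """Sum n_tasks skewed task durations (median ~2 days, with hidden work);
+    by the central limit theorem the totals cluster, so percentiles are usable."""
+    totals = sorted(
+        sum(random.lognormvariate(0.7, 0.5) for _ in range(n_tasks))
+        for _ in range(trials)
+    )
+    return totals[int(trials * 0.8)]
+
+print(f"80% chance of finishing within {finish_day_80th_percentile():.0f} days")
+```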
![Three Dimensions From Fill-The-Bucket](/img/estimates/dimensions.png)
-In Risk-First, one of the main messages has been that it's all about your [Internal Model](../thinking/Glossary.md#internal-model). If you have a good model of the world, then you're likely to be able to [Take Actions](../thinking/Glossary.md#taking-action) in the world that lead you to positions of lower risk.
+In Risk-First, one of the main messages has been that it's all about your [Internal Model](/thinking/Glossary.md#internal-model). If you have a good model of the world, then you're likely to be able to [Take Actions](/thinking/Glossary.md#taking-action) in the world that lead you to positions of lower risk.
So the main reason for identifying all these different problem domains for estimation has been to improve that internal model.
@@ -44,7 +44,7 @@ As we discussed in [Journeys](Journeys.md), there are plenty of problems in gett
![Journeys Meets Cabinets](/img/estimates/dimensions-2.png)
-What happens when you relax those constraints? If there is _no map_ and the _closeness_ heuristic isn't available, you're in a maze. You can't tell how "done" you are in a maze by judging your distance to the exit point - you may be heading to a [Dead End](../risks/Complexity-Risk.md#dead-end-risk) anyway!
+What happens when you relax those constraints? If there is _no map_ and the _closeness_ heuristic isn't available, you're in a maze. You can't tell how "done" you are in a maze by judging your distance to the exit point - you may be heading to a [Dead End](/tags/Dead-End-Risk) anyway!
![Maze Estimating](/img/estimates/mazes.png)
@@ -104,10 +104,6 @@ Turns out, I am not the only person to draw this analogy:
So I find the _transport network_ analogy to be a useful one. But actually it ties in nicely with where this track goes next.
-Maintaining a transport network is a balancing act. In an ideal world, every destination would be connected with every other. In reality, we adopt hub-and-spoke architectures to minimise the cost of maintaining all the connections. In essence, turning our transport network into some kind of _heirarchy_.
+Maintaining a transport network is a balancing act. In an ideal world, every destination would be connected with every other. In reality, we adopt hub-and-spoke architectures to minimise the cost of maintaining all the connections. In essence, turning our transport network into some kind of _hierarchy_.
-If we consider a software system to be a sort of network, then hierarchy turns out to be a crucial tool we can apply to understanding it.
-
-You can look more at the importance of _hierarchies_ in the [On Complexity Track](../complexity/Start.md).
-
-However, if you're here to continue learning about _estimating_, it's time to look at [Fixing Scrum](Fixing-Scrum.md).
+It's time to look at [Fixing Scrum](Fixing-Scrum.md).
diff --git a/docs/estimating/Fill-The-Bucket.md b/docs/estimating/Fill-The-Bucket.md
index c55a4c40c..d914866b7 100644
--- a/docs/estimating/Fill-The-Bucket.md
+++ b/docs/estimating/Fill-The-Bucket.md
@@ -85,7 +85,7 @@ In the above simulation, we are trying to fit a Normal Distribution, estimated f
You should be able to see that when you move from two to three samples, the variance will probably change _a lot_. However, moving from twenty to thirty samples means it hardly changes at all.
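+You can reproduce that effect in a few lines (a sketch, not the article's simulation): resample repeatedly and watch how much the n-sample estimate itself wobbles.
+
+```python
+import random
+import statistics
+
+def wobble_of_estimate(n, trials=2000):
+    """How much an n-sample estimate of the standard deviation itself varies."""
+    estimates = [
+        statistics.stdev(random.gauss(50, 10) for _ in range(n))
+        for _ in range(trials)
+    ]
+    return statistics.stdev(estimates)
+
+for n in (2, 3, 20, 30):
+    print(n, round(wobble_of_estimate(n), 2))
+# Moving from 2 to 3 samples changes the estimate a lot; 20 to 30 barely at all.
+```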
-This kind of measurement and estimating is the bread-and-butter of all kinds of [Operational Control](../risks/Operational-Risk.md) systems.
+This kind of measurement and estimating is the bread-and-butter of all kinds of [Operational Control](/tags/Operational-Risk) systems.
## Big-O
@@ -125,7 +125,7 @@ There are three charts above:
- The top (red) chart is showing the probability density for us completing the work. Our actual completion time is one point chosen randomly from the area in red. So, we're probably looking at around 32 days.
- The middle (blue) chart shows our return distribution. As you can see, it starts sliding down after 20 days, eventually ending up in negative territory. Leaving the estimate at 20 days gives us the _highest possible_ payout of £10,000, increasing our estimate reduces this maximum.
- - The bottom (orange) chart multiplies these two together to give us a measure of [financial risk](../risks/Scarcity-Risk.md#funding-risk). Without adjusting the estimate, we're more likely to lose than win.
+ - The bottom (orange) chart multiplies these two together to give us a measure of [financial risk](/tags/Funding-Risk). Without adjusting the estimate, we're more likely to lose than win.
Are you a gambler? If you can just make everyone work a couple of extra hours' overtime, you'll be much more likely to make the big bucks. But without cheating like this, it's probably best to give an estimate around 30 days or more.
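+That reasoning can be sketched directly. All contract figures here are invented to match the charts: a £10,000 payout for a 20-day quote, shrinking by £500 per extra day quoted, and £1,000 per day of overrun.
+
+```python
+import random
+
+def profit(estimate, actual):
+    payout = 10_000 - 500 * max(0, estimate - 20)  # quoting longer shrinks the payout
+    penalty = 1_000 * max(0, actual - estimate)    # overrunning the quote costs
+    return payout - penalty
+
+def expected_profit(estimate, trials=10_000):
+    # Completion time modelled as roughly 32 +/- 4 days, as in the red chart.
+    return sum(profit(estimate, random.gauss(32, 4)) for _ in range(trials)) / trials
+
+for estimate in (20, 25, 30, 35, 40):
+    print(estimate, round(expected_profit(estimate)))
+# Quoting 20 days maximises the payout but loses money on average;
+# the expected profit peaks with a quote in the early thirties.
+```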
@@ -133,7 +133,7 @@ Are you a gambler? If you can just make everyone work a couple of extra hours'
This is a really contrived example, but actually this represents _most of_ how banks, insurance companies, investors etc. work out risk, simply multiplying the probability of something happening by what is lost when it does happen. But let's look at some criticisms of this:
-1. Aren't there other options? We might be able to work nights to get the project done, or hire more staff, or give bonuses for overtime _or something_. In fact, in [Pressure](../practices/Pressure.md) we'll look at some of these factors.
+1. Aren't there other options? We might be able to work nights to get the project done, or hire more staff, or give bonuses for overtime _or something_. In fact, in [Pressure](/tags/Pressure) we'll look at some of these factors.
2. We've actually got a project here which _degrades gracefully_. The costs of taking longer are clearly sign-posted in advance. In reality, the costs of missing a date might be much more disastrous: not getting your game completed for Christmas, missing a regulatory deadline, not being ready for an important demo - these are all-or-nothing outcomes where it's a [stark contrast between in-time and missing-the-bus](/tags/Deadline-Risk).
@@ -149,6 +149,6 @@ But there are lots of ways [Fill-The-Bucket](Fill-The-Bucket.md) goes wrong, and
2. Each unit is pretty much the same as another.
3. Each unit is _independent_ of the others.
-In [the financial crisis](../risks/Risk-Landscape.md#example-the-financial-crisis), we saw how estimates of risk failed because they violated point 3.
+In [the financial crisis](/risks/Risk-Landscape.md#example-the-financial-crisis), we saw how estimates of risk failed because they violated point 3.
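+A sketch of why point 3 matters so much (figures invented): ten units that fail independently almost never fail together, but wire them to one shared factor and the joint failure happens about one time in ten.
+
+```python
+import random
+
+def chance_of_mass_failure(correlated, trials=100_000):
+    """Probability that 8 or more of 10 units fail at once."""
+    hits = 0
+    for _ in range(trials):
+        if correlated:
+            # Failures share one driver (e.g. house prices), plus a little noise.
+            shock = random.random() < 0.10
+            failures = 10 if shock else sum(random.random() < 0.02 for _ in range(10))
+        else:
+            failures = sum(random.random() < 0.10 for _ in range(10))
+        hits += failures >= 8
+    return hits / trials
+
+print(chance_of_mass_failure(correlated=False))  # ~0.0 - effectively never
+print(chance_of_mass_failure(correlated=True))   # ~0.1 - one time in ten
+```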
Let's have a look at [what happens when we relax these constraints](Kitchen-Cabinet.md).
\ No newline at end of file
diff --git a/docs/estimating/Fixing-Scrum.md b/docs/estimating/Fixing-Scrum.md
index 7550ff366..754fd948f 100644
--- a/docs/estimating/Fixing-Scrum.md
+++ b/docs/estimating/Fixing-Scrum.md
@@ -1,7 +1,6 @@
---
title: Fixing Scrum
description: "Part of the 'Estimating' Risk-First Track, looking at the essential flaws in Scrum's time-boxing of work."
-
featured:
class: bg1
element: 'Fixing Scrum'
@@ -30,7 +29,7 @@ Work in Scrum is done within periods of time called _Sprints_. Each sprint ends
> "The goal of this activity is to inspect and adapt the product being built... Everyone in attendance gets clear visibility into what is occurring and has an opportunity to help guide the forthcoming development to ensure that the most business-appropriate solution is created." - Essential Scrum (p26), _Rubin_
-In Risk-First, we tend to call this validation step [Meeting Reality](../thinking/Glossary.md#meet-reality): you are creating a [feedback loop](../thinking/Cadence.md) in order to minimise risk. What is the risk you are minimising? Essentially, we are trying to reduce the risk of the developers _building the wrong thing_, which could be due to misunderstanding of requirements, or perfectionism, or because the piece of work was ill-conceived in the first place. In Risk-First, the risk of building the wrong thing is called [Feature Risk](/tags/Feature-Risk).
+In Risk-First, we tend to call this validation step [Meeting Reality](/thinking/Glossary.md#meet-reality): you are creating a [feedback loop](/thinking/Cadence.md) in order to minimise risk. What is the risk you are minimising? Essentially, we are trying to reduce the risk of the developers _building the wrong thing_, which could be due to misunderstanding of requirements, or perfectionism, or because the piece of work was ill-conceived in the first place. In Risk-First, the risk of building the wrong thing is called [Feature Risk](/tags/Feature-Risk).
![Feature Risk mitigated by Meeting Reality](/img/generated/estimating/scrum/scrum1.png)
@@ -38,7 +37,7 @@ The above diagram demonstrates us mitigating [Feature Risk](/tags/Feature-Risk)
![Schedule Risk for Stakeholders](/img/generated/estimating/scrum/scrum2.png)
-And that risk is called [Schedule Risk](../risks/Scarcity-Risk.md#schedule-risk). It is shown in the diagram above: the _more feedback_ you are receiving, the more _interruption_ you are causing to the people giving feedback. So you are trying to [Balance Risk](../bets/Purpose-Development-Team.md): while having a _daily_ review for a software project involving all stakeholders would be over-kill and waste a lot of everyone's time, having a _yearly_ review would be too-long a feedback loop. Balancing risk here means doing the feedback loop _just often enough_.
+And that risk is called [Schedule Risk](/tags/Schedule-Risk). It is shown in the diagram above: the _more feedback_ you are receiving, the more _interruption_ you are causing to the people giving feedback. So you are trying to [Balance Risk](../bets/Purpose-Development-Team.md): while having a _daily_ review for a software project involving all stakeholders would be over-kill and waste a lot of everyone's time, having a _yearly_ review would be too long a feedback loop. Balancing risk here means doing the feedback loop _just often enough_.
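+"Just often enough" can be sketched as a toy optimisation (costs invented): frequent reviews burn stakeholder time, infrequent ones let the product drift.
+
+```python
+def total_risk(reviews_per_month):
+    interruption = 2.0 * reviews_per_month  # stakeholder-days burned in reviews
+    drift = 12.0 / reviews_per_month        # days wasted building the wrong thing
+    return interruption + drift
+
+for n in (1, 2, 4, 8):
+    print(n, total_risk(n))
+# 1 -> 14.0, 2 -> 10.0, 4 -> 11.0, 8 -> 17.5: the minimum sits between the
+# extremes (at sqrt(12/2) ~ 2.4 reviews a month for these made-up costs).
+```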
## Time-Boxing To The Rescue
@@ -50,7 +49,7 @@ In order to balance the risks, Sprints are time-boxed: rather than just request
But the problem is that now the developers have to coordinate their work to be ready before the Sprint Review starts. Even for a _development team of one_ it can be a challenge to coordinate like this: often, development is completed a few days early, or incomplete by the day of the demo, so it might be easier to move the meeting.
-As the number of developers in the team grows, the [Coordination Risk](../risks/Coordination-Risk.md) increases: rather than bulls-eye-ing a single feature for demo day, you're now expecting a whole team to do it.
+As the number of developers in the team grows, the [Coordination Risk](/tags/Coordination-Risk) increases: rather than bulls-eye-ing a single feature for demo day, you're now expecting a whole team to do it.
Nevertheless, time-boxing is a foundational principle of Scrum. So, to get time-boxing to work, we have to rely on planning and estimating.
@@ -69,7 +68,7 @@ Now, although the above diagram _makes sense_ (estimating as a mitigation to coo
- **Finally, sometimes, you'll have a problem that's like a [Journey](Journeys.md).** Maybe you're trying to set up a new deployment pipeline? The first step, finding servers, turned out to be easy, but now you're trying to license the software to run on them, and it's taking longer. The journey you have to take is _known_, but the steps along it are all different. Will you hit the Sprint Review on time? It's super-hard to say.
-Given that estimating is so problematic, does it make any sense to try to mitigate our [Coordination Risk](../risks/Coordination-Risk.md) using estimates?
+Given that estimating is so problematic, does it make any sense to try to mitigate our [Coordination Risk](/tags/Coordination-Risk) using estimates?
##### As a tool for dealing with Coordination Risk, _Estimating_ is an unreliable foot-gun.
@@ -77,7 +76,7 @@ Given that estimating is so problematic, does it make any sense to try to mitiga
![Schedule Risk for Stakeholders](/img/generated/estimating/scrum/scrum2.png)
-Let's go back a step: isn't there another way to tackle the [Schedule Risk](../risks/Scarcity-Risk.md#schedule-risk) problem _instead_ of time-boxing / huge sprint review meeting? As we saw, this led to [Coordination Risk](../risks/Coordination-Risk.md) issues. Here are several other ways you could solve this problem:
+Let's go back a step: isn't there another way to tackle the [Schedule Risk](/tags/Schedule-Risk) problem _instead_ of time-boxing / huge sprint review meeting? As we saw, this led to [Coordination Risk](/tags/Coordination-Risk) issues. Here are several other ways you could solve this problem:
- **Instead of giving live demos of completed features, give video demos of software in progress.** In the film industry, this is called [Dailies or Rushes](https://en.wikipedia.org/wiki/Dailies), because they produce a reel of all the film shot that day. Why? Again - it's about risk management: you're trying to find out if there are any technical issues with the film shot before shooting anything new. Going back and shooting several weeks later can often be impossibly difficult. (A great example being the reshoots of the film "Justice League" involving Henry Cavill, playing Superman. Sadly, when these reshoots were done, he was filming "Mission Impossible: Fallout", and had grown a moustache, which the director of that film, Chris McQuarrie, couldn't let him shave. So Superman's moustache was edited-out using CGI.)
@@ -124,7 +123,7 @@ How can we, as software developers, minimise the chance of building the wrong th
Look at the diagram above showing what Scrum is trying to do to mitigate [Feature Risk](/tags/Feature-Risk):
-- We [Meet Reality](../thinking/Glossary.md#meet-reality) to ensure we've got a feedback loop.
+- We [Meet Reality](/thinking/Glossary.md#meet-reality) to ensure we've got a feedback loop.
- We **time-box** to avoid wasting stake-holders' time (Schedule Risk).
- We do **planning poker** to try and avoid the Coordination Risk problem of everyone needing to complete their work for the end of the Sprint.
diff --git a/docs/estimating/Fractals.md b/docs/estimating/Fractals.md
index bf65f1b87..2832691d1 100644
--- a/docs/estimating/Fractals.md
+++ b/docs/estimating/Fractals.md
@@ -59,7 +59,7 @@ If your problem doesn't have an exact, defined end-goal, there is simply no way
![Opportunity on the Risk Landscape](/img/estimates/fractal1.png)
-You might have some idea (selling hats for dogs?) of some interesting area of value on the [Risk Landscape](../thinking/Glossary.md#risk-landscape) that you want to occupy, as shown in the above diagram.
+You might have some idea (selling hats for dogs?) of some interesting area of value on the [Risk Landscape](/thinking/Glossary.md#risk-landscape) that you want to occupy, as shown in the above diagram.
Your best bet is to try and colonise the area of value _as fast as possible_ by using as much readily available software as possible.
@@ -69,16 +69,16 @@ Maybe version one looks something like the diagram above: a few hastily-assemble
![Second Version](/img/estimates/fractal3.png)
-Releasing the first version might fill in some of the blanks, and show you more detail on the [Risk Landscape](../thinking/Glossary.md#risk-landscape). Effectively showing you a more detailed view of the coastline. Feedback from users will provide you with a better understanding of exactly what this fractal problem-space looks like.
+Releasing the first version might fill in some of the blanks and show you more detail on the [Risk Landscape](/thinking/Glossary.md#risk-landscape), effectively showing you a more detailed view of the coastline. Feedback from users will provide you with a better understanding of exactly what this fractal problem-space looks like.
![Third Version](/img/estimates/fractal4.png)
-As you go on [Meeting Reality](../thinking/Glossary.md#meet-reality), the shape of the problem domain comes into focus, and you're able to _refine_ your solution to match it more exactly.
+As you go on [Meeting Reality](/thinking/Glossary.md#meet-reality), the shape of the problem domain comes into focus, and you're able to _refine_ your solution to match it more exactly.
Is it possible to estimate problems in the Fractal Shape domain? The best you might be able to do is to balance two competing objectives:
-- Building Product: By building functionality you head towards your [Goal](../thinking/Glossary.md#goal) on the [Risk Landscape](../thinking/Glossary.md#risk-landscape). But how do you know this is the right goal?
-- [Meeting Reality](../thinking/Glossary.md#meet-reality): By putting your product "out there" you find your customers and your niche in the market, and you explore the [Risk Landscape](../thinking/Glossary.md#risk-landscape). But this takes time and effort away from _building product_.
+- Building Product: By building functionality you head towards your [Goal](/thinking/Glossary.md#goal) on the [Risk Landscape](/thinking/Glossary.md#risk-landscape). But how do you know this is the right goal?
+- [Meeting Reality](/thinking/Glossary.md#meet-reality): By putting your product "out there" you find your customers and your niche in the market, and you explore the [Risk Landscape](/thinking/Glossary.md#risk-landscape). But this takes time and effort away from _building product_.
With this in mind, you estimate a useful amount of time to go round this cycle, fixing the time but letting the deliverable vary.
diff --git a/docs/estimating/Interference-Checklist.md b/docs/estimating/Interference-Checklist.md
index 4e660fe18..2c1a7f7bd 100644
--- a/docs/estimating/Interference-Checklist.md
+++ b/docs/estimating/Interference-Checklist.md
@@ -33,28 +33,28 @@ Download this in [PDF](/estimating/Interference-Checklist.pdf) or [Numbers](/est
| **Area** | **Concern** | **Notes** | **Point Value** |
| -------------------------------------------- | --------------------------------------------------------------------------------- | --------- | --------------- |
| **[Communication Risks](/tags/Communication-Risk)** | | | |
-| **\- [Channel Risk](../risks/Communication-Risk.md#channel-risk)** | Requires input from other team members | | |
+| **\- [Channel Risk](/tags/Channel-Risk)** | Requires input from other team members | | |
| | Requires input from other teams | | |
| | Requires input from other departments | | |
| | Input required, not clear from whom, but further up the hierarchy | | |
| | | | |
-| **\- [Protocol Risk](../risks/Communication-Risk.md#protocol-risk)** | Requires agreement on a protocol / data format with another team | | |
+| **\- [Protocol Risk](/tags/Protocol-Risk)** | Requires agreement on a protocol / data format with another team | | |
| | Requires regular meetings | | |
| | | | |
-| **\- [Learning-Curve Risk](../risks/Communication-Risk.md#learning-curve-risk)** | Requires learning an unfamiliar technology / standard | | |
+| **\- [Learning-Curve Risk](/tags/Learning-Curve-Risk)** | Requires learning an unfamiliar technology / standard | | |
| | Untrained staff involved in the delivery | | |
| | | | |
-| **\- [Invisibility Risk](../risks/Communication-Risk.md#invisibility-risk)** | Specifications not available | | |
+| **\- [Invisibility Risk](/tags/Invisibility-Risk)** | Specifications not available | | |
| | Reverse-Engineering Required | | |
| | | | |
-| **\- [Internal-Model Risk](../risks/Communication-Risk.md#internal-model-risk)** | Involves reconciliation with another data source | | |
+| **\- [Internal-Model Risk](/tags/Internal-Model-Risk)** | Involves reconciliation with another data source | | |
| | Involves real-time data synchronisation | | |
| | | | |
-| **\- [Map-And-Territory Risk](../risks/Map-And-Territory-Risk.md)** | Use of metrics | | |
+| **\- [Map-And-Territory Risk](/tags/Map-And-Territory-Risk)** | Use of metrics | | |
| | Use of bonuses | | |
| | Competing targets / KPIs | | |
| | | | |
-| **[Coordination Risks](../risks/Coordination-Risk.md)** | Task will require an approval from someone outside the team | | |
+| **[Coordination Risks](/tags/Coordination-Risk)** | Task will require an approval from someone outside the team | | |
| | Task requires sign-off from a committee/board | | |
| | Requires other tasks to be completed | | |
| | Work must be coordinated amongst multiple stakeholders | | |
@@ -64,8 +64,8 @@ Download this in [PDF](/estimating/Interference-Checklist.pdf) or [Numbers](/est
| | Developers will be required to work together | | |
| | Teams are required to work together | | |
| | | | |
-| **[Complexity Risks](../risks/Complexity-Risk.md)** | | | |
-| **\- [Codebase Risk](../risks/Complexity-Risk.md#codebase-risk)** | Involves refactoring | | |
+| **[Complexity Risks](/tags/Complexity-Risk)** | | | |
+| **\- [Codebase Risk](/tags/Codebase-Risk)** | Involves refactoring | | |
| | Introduces new languages / DSLs | | |
| | Requires adding significant code | | |
| | Can’t be easily unit-tested | | |
@@ -73,7 +73,7 @@ Download this in [PDF](/estimating/Interference-Checklist.pdf) or [Numbers](/est
| | Requires deleting significant code | | |
| | Creates new repos | | |
| | | | |
-| **\- [Dead-End Risk](../risks/Complexity-Risk.md#dead-end-risk)** | Involves experimentation about best approach | | |
+| **\- [Dead-End Risk](/tags/Dead-End-Risk)** | Involves experimentation about best approach | | |
| | No prior work exists in this area | | |
| | Significant algorithmic innovation is required | | |
| | | | |
@@ -84,33 +84,33 @@ Download this in [PDF](/estimating/Interference-Checklist.pdf) or [Numbers](/est
| | Market choice | | |
| | | | |
| **[Feature Risks](/tags/Feature-Risk)** | | | |
-| **\- [Conceptual Integrity Risk](../risks/Feature-Risk.md#conceptual-integrity-risk)** | Requires new interface to be added | | |
+| **\- [Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk)** | Requires new interface to be added | | |
| | Requires refactoring of existing interfaces | | |
| | Deprecates existing functionality | | |
| | Requested by multiple stakeholders | | |
| | | | |
-| **\- [Regression Risk](../risks/Feature-Risk.md#regression-risk)** | Changes existing functionality | | |
+| **\- [Regression Risk](/tags/Regression-Risk)** | Changes existing functionality | | |
| | | | |
-| **\- [Feature-Access Risk](../risks/Feature-Risk.md#feature-access-risk)**| Interface Experimentation required | | |
+| **\- [Feature-Access Risk](/tags/Feature-Access-Risk)**| Interface Experimentation required | | |
| | Varied user population | | |
| | Accessibility requirements | | |
| | Localisation Requirements | | |
| | | | |
-| **\- [Implementation Risk](../risks/Feature-Risk.md#implementation-risk)** | Developer unfamiliar with the requirements / system | | |
+| **\- [Implementation Risk](/tags/Implementation-Risk)** | Developer unfamiliar with the requirements / system | | |
| | Known corner-cases | | |
| | Home-grown protocols vs. standards | | |
| | | | |
-| **\- [Feature-Fit](../risks/Feature-Risk.md#feature-fit-risk)**| Success criteria hard to define | | |
+| **\- [Feature-Fit](/tags/Feature-Fit-Risk)**| Success criteria hard to define | | |
| | Difficult-to-access user base | | |
| | | | |
-| **\- [Market Risk](../risks/Feature-Risk.md#market-risk)** | Rapidly changing market | | |
+| **\- [Market Risk](/tags/Market-Risk)** | Rapidly changing market | | |
| | Market needs are not clear | | |
| | Market itself is uncertain | | |
| | Product needs to find its market | | |
| | | | |
| **[Agency Risks](/tags/Agency-Risk)** /| 3rd Party involved | | |
-| **[Trust Risk](../risks/Communication-Risk.md#trust--belief-risk)** / | Competitor involvement | | |
-| **[Security Risks](../risks/Agency-Risk.md#security)**| General public involved | | |
+| **[Trust Risk](/tags/Trust-And-Belief-Risk)** / | Competitor involvement | | |
+| **[Security Risks](/tags/Security-Risk)**| General public involved | | |
| | Available on the open internet | | |
| | Requires authentication / authorisation schemes | | |
| | Requires cryptography | | |
@@ -120,7 +120,7 @@ Download this in [PDF](/estimating/Interference-Checklist.pdf) or [Numbers](/est
| | Involves payments | | |
| | Involves security infrastructure: firewalls, proxies, VPN etc. | | |
| | | | |
-| **[Dependency Risks](../risks/Dependency-Risk.md)** | | | |
+| **[Dependency Risks](/tags/Dependency-Risk)** | | | |
| **\- [Software Dependency Risk](/tags/Software-Dependency-Risk)**| Requires the introduction of a new dependency | | |
| | … which is immature | | |
| | … which must be chosen from competing alternatives | | |
@@ -128,24 +128,24 @@ Download this in [PDF](/estimating/Interference-Checklist.pdf) or [Numbers](/est
| | … which is In-House | | |
| | … which is Commercial | | |
| | | | |
-| **\- [Scarcity Risk](../risks/Scarcity-Risk.md)** | Requires booking time with a 3rd party | | |
+| **\- [Scarcity Risk](/tags/Scarcity-Risk)** | Requires booking time with a 3rd party | | |
| | Requires specific licenses / approvals | | |
| | | | |
-| **\- [Funding Risk](../risks/Scarcity-Risk.md#funding-risk)** | Requires payment by a customer for completed work | | |
+| **\- [Funding Risk](/tags/Funding-Risk)** | Requires payment by a customer for completed work | | |
| | Requires agreement on pricing / budget | | |
| | | | |
-| **\- [Staff Risk](../risks/Scarcity-Risk.md#staff-risk)** | Requires involvement from other members of staff | | |
+| **\- [Staff Risk](/tags/Staff-Risk)** | Requires involvement from other members of staff | | |
| | Requires hiring-in new specialist skills | | |
| | Has dependency on key-persons | | |
| | | | |
-| **\- [Red-Queen Risk](../risks/Scarcity-Risk.md#red-queen-risk)** | Dependency on rapidly changing/unpublished standards | | |
+| **\- [Red-Queen Risk](/tags/Red-Queen-Risk)** | Dependency on rapidly changing/unpublished standards | | |
| | Dependency on rapidly evolving 3rd party code | | |
| | | | |
-| **\- [Schedule Risk](../risks/Scarcity-Risk.md#schedule-risk)** | Task is repetitive | | |
+| **\- [Schedule Risk](/tags/Schedule-Risk)** | Task is repetitive | | |
| | Task takes a long time | | |
| | Task is unusually tedious | | |
| | | | |
-| **\- [Reliability Risk](../risks/Dependency-Risk.md#reliability-risk)** | Has strict reliability / response time requirements | | |
+| **\- [Reliability Risk](/tags/Reliability-Risk)** | Has strict reliability / response time requirements | | |
| | Has unusual hosting requirements | | |
| | Unfamiliar hardware involved | | |
| | | | |
@@ -156,7 +156,7 @@ Download this in [PDF](/estimating/Interference-Checklist.pdf) or [Numbers](/est
| **\- [Deadline Risk](/tags/Deadline-Risk)** | Has components that must be completed during certain time windows (e.g. weekends) | | |
| | Has components that must be completed before drop-dead dates | | |
| | | | |
-| **[Operational Risk](../risks/Operational-Risk.md)** | Requires new or extra production support | | |
+| **[Operational Risk](/tags/Operational-Risk)** | Requires new or extra production support | | |
| | Requires special roll-out | | |
| | Legal Requirements | | |
| | Regulatory Requirements | | |
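+
+To make a checklist like this easier to use in a planning session, you could capture it as data and have a script prompt for each concern in turn. Below is a minimal sketch (not part of the downloads above); the handful of area and concern strings are just a sample of the rows in the table:
+
+```python
+# A sketch: a slice of the Interference Checklist as data, so a
+# planning tool can prompt the team for each concern in turn.
+CHECKLIST = {
+    "Scarcity Risk": [
+        "Requires booking time with a 3rd party",
+        "Requires specific licenses / approvals",
+    ],
+    "Schedule Risk": [
+        "Task is repetitive",
+        "Task takes a long time",
+        "Task is unusually tedious",
+    ],
+    "Operational Risk": [
+        "Requires new or extra production support",
+        "Requires special roll-out",
+    ],
+}
+
+for area, concerns in CHECKLIST.items():
+    for concern in concerns:
+        print(f"[{area}] {concern}? (y/n)")
+```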
diff --git a/docs/estimating/Journeys.md b/docs/estimating/Journeys.md
index a7315baa2..ebdbce44d 100644
--- a/docs/estimating/Journeys.md
+++ b/docs/estimating/Journeys.md
@@ -14,7 +14,7 @@ tweet: yes
# Journeys
-A third way to conceive of software development is as a _journey_ on the [Risk Landscape](../thinking/Glossary.md#risk-landscape). For example, in a startup we might start at a place where we have no product, no customers and some funding. We go on a journey of discovery and end up in a place where hopefully we _have_ a product, customers and an income stream.
+A third way to conceive of software development is as a _journey_ on the [Risk Landscape](/thinking/Glossary.md#risk-landscape). For example, in a startup we might start at a place where we have no product, no customers and some funding. We go on a journey of discovery and end up in a place where hopefully we _have_ a product, customers and an income stream.
There are many ways we could do this journey, and many destinations. The idea of "pivoting" your startup idea feels very true to the [Journey](Journeys.md) analogy, because that literally means changing direction. _The place where we were headed sucked, let's go over here_.
@@ -22,11 +22,11 @@ What does this journey look like in Risk-First terms?
![Product Development](/img/generated/estimating/journey.png)
-As this diagram shows, at the start we have plenty of [Feature Fit Risk](../risks/Feature-Risk.md#feature-fit-risk): if we have _no_ product, then it definitely doesn't fit our customer's needs! Also we have some amount of [Funding Risk](../risks/Scarcity-Risk.md#funding-risk), as at some point the money will run out.
+As this diagram shows, at the start we have plenty of [Feature Fit Risk](/tags/Feature-Fit-Risk): if we have _no_ product, then it definitely doesn't fit our customer's needs! Also we have some amount of [Funding Risk](/tags/Funding-Risk), as at some point the money will run out.
-After that, we use every trick in the book called "product development" to get to a new place on the [Risk Landscape](../thinking/Glossary.md#risk-landscape). This place (hopefully) will have a better risk profile than the one we came from.
+After that, we use every trick in the book called "product development" to get to a new place on the [Risk Landscape](/thinking/Glossary.md#risk-landscape). This place (hopefully) will have a better risk profile than the one we came from.
-If we're successful then yes, we'll have the [Operational Risk](../risks/Operational-Risk.md) of running a business, but hopefully we'll be in a better position than we started.
+If we're successful then yes, we'll have the [Operational Risk](/tags/Operational-Risk) of running a business, but hopefully we'll be in a better position than we started.
## A London Example
@@ -44,7 +44,7 @@ If you were doing this same journey on foot, it's a very direct route, but would
## Journey Risks
-In the software development past, _building it yourself_ was the only way to get anything done. It was like London _before road and rail_. Nowadays, you are bombarded with choices. It's actually _worse than London_ because it's not even a two-dimensional geographic space and there are multitudes of different routes and acceptable destinations. Journey planning on the software [Risk Landscape](../thinking/Glossary.md#risk-landscape) is an optimisation problem _par excellence_.
+In software development's past, _building it yourself_ was the only way to get anything done. It was like London _before road and rail_. Nowadays, you are bombarded with choices. It's actually _worse than London_ because it's not even a two-dimensional geographic space and there are multitudes of different routes and acceptable destinations. Journey planning on the software [Risk Landscape](/thinking/Glossary.md#risk-landscape) is an optimisation problem _par excellence_.
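+
+To make "optimisation problem" concrete: if you can score each route segment with a cost (say, expected time plus a buffer for the risk involved), picking a route becomes a shortest-path search. Here is a toy sketch of that idea; the nodes and weights are entirely invented:
+
+```python
+import heapq
+
+# Toy risk landscape: edge weight = expected time + risk buffer.
+# Nodes and weights are invented for illustration.
+GRAPH = {
+    "no product": [("build prototype", 9), ("buy off-the-shelf", 3)],
+    "build prototype": [("beta", 4)],
+    "buy off-the-shelf": [("beta", 2)],
+    "beta": [("launched", 3)],
+}
+
+def cheapest_route(start, goal):
+    """Dijkstra's algorithm over the landscape graph."""
+    queue, seen = [(0, start, [start])], set()
+    while queue:
+        cost, node, path = heapq.heappop(queue)
+        if node == goal:
+            return cost, path
+        if node in seen:
+            continue
+        seen.add(node)
+        for nxt, weight in GRAPH.get(node, []):
+            heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
+    return None, None
+
+print(cheapest_route("no product", "launched"))
+# (8, ['no product', 'buy off-the-shelf', 'beta', 'launched'])
+```
+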
How can we think about estimating in such a domain? There are clearly a number of factors to come into play:
@@ -98,7 +98,7 @@ This should look a _fair bit_ like software architecture: often, we sketch out
At the other extreme, if we're estimating a single story, we can break down work like this. For development tasks which _look like a journey_, this is what I'm doing. _"If I build the Foo component using Spring and the Bar component in HTML, I can join them together with some Java code..."_
-Further, as we solve problems in our code-base, we break them down into smaller and smaller parts. (We'll come back to this in [Hierarchies](../complexity/Hierarchies.md).)
+Further, as we solve problems in our code-base, we break them down into smaller and smaller parts. (We'll come back to this in [Hierarchies](/complexity/Hierarchies.md).)
So **Journey Estimating** is three things all at once:
diff --git a/docs/estimating/Kitchen-Cabinet.md b/docs/estimating/Kitchen-Cabinet.md
index 7b0c2eea3..c4a6caaf5 100644
--- a/docs/estimating/Kitchen-Cabinet.md
+++ b/docs/estimating/Kitchen-Cabinet.md
@@ -77,7 +77,7 @@ Let's assume that the exponential distribution _does_ model software development
With any estimate, there are risks in both under- and over- estimating:
- - **Too Long**: In estimating too much time, you might not be given the work or your business might [miss the opportunity in the marketplace](../risks/Scarcity-Risk.md#opportunity-risk). A too cautious risk might doom a potentially successful project before it has even started.
+ - **Too Long**: In estimating too much time, you might not be given the work or your business might [miss the opportunity in the marketplace](/risks/Scarcity-Risk.md#opportunity-risk). An over-cautious estimate might doom a potentially successful project before it has even started (the sketch after this list puts rough numbers on the trade-off).
- **Too Short**: If you estimate too little time, you might miss important coordinating dates with your marketing team, or miss the Christmas window, or run out of "runway".
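+
+Under that assumption we can at least put rough numbers on the trade-off. A minimal sketch, assuming task duration follows an exponential distribution with an invented mean of ten days: the longer the estimate you quote, the smaller the chance of overrunning, but the more too-long risk you take on up-front.
+
+```python
+import math
+
+mean_days = 10            # assumed mean task duration
+lam = 1 / mean_days       # rate parameter of the exponential distribution
+
+def p_overrun(estimate):
+    """P(actual duration > estimate) when duration ~ Exponential(lam)."""
+    return math.exp(-lam * estimate)
+
+for estimate in (5, 10, 20, 30):
+    print(f"quote {estimate:>2} days -> P(overrun) = {p_overrun(estimate):.0%}")
+
+# quote  5 days -> P(overrun) = 61%
+# quote 10 days -> P(overrun) = 37%
+# quote 20 days -> P(overrun) = 14%
+# quote 30 days -> P(overrun) = 5%
+```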
@@ -114,7 +114,7 @@ Is lambda predictable on a project? It doesn't appear that there have been any
### When Does Risk Happen?
-Too-early and too-late risks are both [Scarcity Risks](../risks/Scarcity-Risk.md): they reflect the fact that time/budget/staff/opportunity are scarce resources which you can run out of.
+Too-early and too-late risks are both [Scarcity Risks](/tags/Scarcity-Risk): they reflect the fact that time/budget/staff/opportunity are scarce resources which you can run out of.
But where is the risk accrued? If you give an estimate, you lock in a maximum too-early risk _at that point_. From then on, the clock is ticking: too-early risk decreases towards zero as the due-date approaches.
@@ -122,13 +122,13 @@ This is important: the point at which you present your estimate is the point of
![Accepting an estimate](/img/generated/estimating/accept_estimate.png)
-The diagram above is an example of this. A supplier is bidding for a contract with a client. The client has functionality they want build (or [Feature Risk](/tags/Feature-Risk) as we call it on Risk-First). The supplier needs money to keep their business going ([Funding Risk](../risks/Scarcity-Risk.md#funding-risk) on this diagram).
+The diagram above is an example of this. A supplier is bidding for a contract with a client. The client has functionality they want built (or [Feature Risk](/tags/Feature-Risk) as we call it on Risk-First). The supplier needs money to keep their business going ([Funding Risk](/tags/Funding-Risk) on this diagram).
-If the estimate is accepted, the supplier's [Funding Risk](../risks/Scarcity-Risk.md#funding-risk) is transferred to the client (the requester of the estimate). Conversely, the trade is that the client's [Feature Risk](/tags/Feature-Risk) is transferred to the supplier.
+If the estimate is accepted, the supplier's [Funding Risk](/tags/Funding-Risk) is transferred to the client (the requester of the estimate). Conversely, the trade is that the client's [Feature Risk](/tags/Feature-Risk) is transferred to the supplier.
-If the supplier is short on opportunities or funds, there is a tendency to under-estimate. That's because the [Feature Risk](/tags/Feature-Risk) is a problem for the supplier _in the future_, whereas their [Funding Risk](../risks/Scarcity-Risk.md#funding-risk) is a problem _right now_.
+If the supplier is short on opportunities or funds, there is a tendency to under-estimate. That's because the [Feature Risk](/tags/Feature-Risk) is a problem for the supplier _in the future_, whereas their [Funding Risk](/tags/Funding-Risk) is a problem _right now_.
-You can often see suppliers under-bid on projects because of this future discounting, which we discussed before in [Evaluating Risk](../thinking/Evaluating-Risk.md#discounting).
+You can often see suppliers under-bid on projects because of this future discounting, which we discussed before in [Evaluating Risk](/thinking/Evaluating-Risk.md#discounting).
This analysis also suggests something else: the process of giving and accepting estimates _transfers risk_. This is a key point which we'll return to later.
@@ -144,7 +144,7 @@ This means that clients often keep projects running for far longer than they sho
There is an alternative to too-early or too-late risk. You can always choose to be _on time_. This is definitely a choice: Just like a student can always hand _something_ in on assignment day (even if it's just a title scrawled on a piece of paper), you can always hand in whatever work you have.
-Then, instead of worrying about [Scarcity Risks](../risks/Scarcity-Risk.md), you are letting [Feature Risk](/tags/Feature-Risk) vary to take up the slack.
+Then, instead of worrying about [Scarcity Risks](/tags/Scarcity-Risk), you are letting [Feature Risk](/tags/Feature-Risk) vary to take up the slack.
So far, we've seen two kinds of estimate: [Fill-The-Bucket](Fill-The-Bucket.md) and [Kitchen-Cabinet](Kitchen-Cabinet.md). Now, it's time to review a third - estimating [Journey Style](Journeys.md), and looking at how we can minimise [Feature Risk](/tags/Feature-Risk) within an available budget.
diff --git a/docs/estimating/On-Story-Points.md b/docs/estimating/On-Story-Points.md
index 0ef8b0ba8..2e4a22373 100644
--- a/docs/estimating/On-Story-Points.md
+++ b/docs/estimating/On-Story-Points.md
@@ -3,11 +3,11 @@ title: On Story Points
description: Part of the 'Estimating' Risk-First Track, about improving estimates using risk checklists.
date: 2021-05-08 13:32:03 +0000
-
featured:
class: bg1
element: 'On Story Points'
tags:
+ - Scrum
- Estimating
sidebar_position: 8
tweet: yes
@@ -37,7 +37,7 @@ At a basic level, to calculate the number of story points for an item of work, y
- **A Project**: Since the story will be embedded in the context of a project, this is an important input. On some projects, work is harder to complete than on others. Things like the choice of languages or architectures have an effect, as do the systems and people the project needs to interface with.
-- **Team Experience**: Over time, the team become more experienced both working with each other and with the project itself. They learn the [Risk Landscape](../risks/Risk-Landscape.md) and understand where the pitfalls lie and how to avoid them.
+- **Team Experience**: Over time, the team become more experienced both working with each other and with the project itself. They learn the [Risk Landscape](/risks/Risk-Landscape.md) and understand where the pitfalls lie and how to avoid them.
## Calculating Story Points
@@ -61,9 +61,9 @@ After some back-and-forth, the team agrees on a number. But what does this numb
- **Ideal Person-Days**: An obvious interpretation is that a story point is some number of person-days. In most of the planning sessions I've been involved in, there is either an explicit or tacit base-lining of story points so that everyone has a similar conception of how much work is involved in one, e.g. "A Story point is a morning". The "ideal" part refers to the actual time you get to spend on a task, away from interruptions, lunches, all-hands meetings and so on. The reason _not_ to use person days directly is that developers all work at different speeds.
-- **Complexity**: An alternate view is that [a story point is about _complexity_](https://www.clearvision-cm.com/blog/why-Story Points-are-a-measure-of-complexity-not-effort/). This means a Sprint is all about budgeting complexity, rather than effort. This makes some sense - [complexity is a recurring theme in Risk-First, after all.](../complexity/Start.md) However, given that the sprint is measured in person-days, and the scrum leader is going to produce a report showing how many story points were completed in a sprint, it's clear that complexity really is just a weak proxy for person-days anyway. In fact, there are lots of tasks that might be low-complexity, but take a lot of time anyway, such as designing 500 icons. This will clearly take a lot of time, but be low-complexity, so you better give it enough story points to represent the time you'll spend on it.
+- **Complexity**: An alternate view is that [a story point is about _complexity_](https://www.clearvision-cm.com/blog/why-story-points-are-a-measure-of-complexity-not-effort/). This means a Sprint is all about budgeting complexity, rather than effort. This makes some sense, but given that the sprint is measured in person-days, and the scrum leader is going to produce a report showing how many story points were completed in a sprint, it's clear that complexity really is just a weak proxy for person-days anyway. In fact, there are lots of tasks that are low-complexity but take a lot of time, such as designing 500 icons: you'd better give that kind of work enough story points to represent the time you'll spend on it.
-- **Relative Sizing**: A third way of looking at it is that really, story points are just about _relative_ sizing: it doesn't matter what they refer to or how big they are, it's all about trying to budget the right amount of work into the sprint. For example, you can either have two one-point stories, or a two-point story, and the effect on the sprint is the same. Because there is no fixed definition of the size of a story point, you do run the risk of story-point "inflation" or "deflation". But unless you are trying to use them to plot team productivity over time, this shouldn't really matter so much. And we'd never make the mistake of doing that, [right](../risks/Map-And-Territory-Risk.md)?
+- **Relative Sizing**: A third way of looking at it is that really, story points are just about _relative_ sizing: it doesn't matter what they refer to or how big they are, it's all about trying to budget the right amount of work into the sprint. For example, you can either have two one-point stories, or a two-point story, and the effect on the sprint is the same. Because there is no fixed definition of the size of a story point, you do run the risk of story-point "inflation" or "deflation". But unless you are trying to use them to plot team productivity over time, this shouldn't really matter so much. And we'd never make the mistake of doing that, [right](/tags/Map-And-Territory-Risk)?
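+
+Taking the relative-sizing view, sprint planning is just a budget check: take stories in priority order while the point total fits the velocity. A toy sketch, with invented velocity and story names:
+
+```python
+velocity = 8                                 # points per sprint (assumed)
+backlog = [("login screen", 2), ("search", 3),
+           ("icon set", 5), ("bug #42", 1)]  # (story, points) in priority order
+
+sprint, spent = [], 0
+for story, points in backlog:
+    if spent + points <= velocity:           # only take what fits the budget
+        sprint.append(story)
+        spent += points
+
+print(sprint, f"- {spent}/{velocity} points")
+# ['login screen', 'search', 'bug #42'] - 6/8 points
+```
+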
## Observations
@@ -75,7 +75,7 @@ In his essay, "Choose Boring Technology", Dan McKinley describes a theoretical i
> "Let’s say every company gets about three innovation tokens. You can spend these however you want, but the supply is fixed for a long while... If you choose to write your website in NodeJS, you just spent one of your innovation tokens. If you choose to use MongoDB, you just spent one of your innovation tokens. If you choose to use service discovery tech that’s existed for a year or less, you just spent one of your innovation tokens... there are many choices of technology that are boring and good, or at least good enough. MySQL is boring. Postgres is boring. PHP is boring. " - [Choose Boring Technology, _Dan McKinley_](https://mcfunley.com/choose-boring-technology)
-What he's driving at here is of course _risk_: with shiny (i.e. non-boring) technology, you pick up lots of [Hidden Risk](../thinking/Glossary.md#hidden-risk). Innovation Tokens are paying for time spent dealing with [Hidden Risk](../thinking/Glossary.md#hidden-risk). Dan's contention is that not only do you have the up-front costs of integrating the shiny technology, but you also have a long tail of extra running costs, as you have to manage the new technology through to maturity in your environment.
+What he's driving at here is of course _risk_: with shiny (i.e. non-boring) technology, you pick up lots of [Hidden Risk](/thinking/Glossary.md#hidden-risk). Innovation Tokens are paying for time spent dealing with [Hidden Risk](/thinking/Glossary.md#hidden-risk). Dan's contention is that not only do you have the up-front costs of integrating the shiny technology, but you also have a long tail of extra running costs, as you have to manage the new technology through to maturity in your environment.
Put this way, couldn't story points be some kind of "Innovation Token"?
@@ -87,7 +87,7 @@ Sometimes, developers provide _tolerances_ around their story-point estimates, "
Another problem in Story Point estimation is bootstrapping. It is expected that, to start with, estimates made by inexperienced teams, or inexperienced team-members, are going to be poor. The expectation is also that over time, through domain experience, the estimates improve. This seems to happen _somewhat_ in my experience. But nowhere near enough.
-A common complaint when tasks overrun is that the team were blind-sided by [Hidden Risk](../thinking/Glossary.md#hidden-risk), but in my experience this boils down to two things:
+A common complaint when tasks overrun is that the team were blind-sided by [Hidden Risk](/thinking/Glossary.md#hidden-risk), but in my experience this boils down to two things:
- Genuine hidden risk, that no-one could have foreseen (e.g. a bug in a device driver that no-one knew about).
- Fake hidden risks, that could have been foreseen with the appropriate up-front effort (e.g. a design approval might take a bit longer than expected due to absence).
@@ -98,18 +98,18 @@ Below, I've sketched out a small section of what this might look like. The [nex
| **Area** | **Concern** | **Notes** | **Point Value** |
| -------------------------------------------- | --------------------------------------------------------------------------------- | --------- | --------------- |
-| **\- [Conceptual Integrity Risk](../risks/Feature-Risk.md#conceptual-integrity-risk)** | Requires new interface to be added | | |
+| **\- [Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk)** | Requires new interface to be added | | |
| | Requires refactoring of existing interfaces | | |
-| **\- [Feature-Access Risk](../risks/Feature-Risk.md#feature-access-risk)**| Interface Experimentation required | | |
+| **\- [Feature-Access Risk](/tags/Feature-Access-Risk)**| Interface Experimentation required | | |
| | Varied user population | | |
| | | | |
-| **\- [Implementation Risk](../risks/Feature-Risk.md#implementation-risk)** | Developer unfamiliar with the requirements / system | | |
-| **\- [Feature-Fit](../risks/Feature-Risk.md#feature-fit-risk)**| Success criteria hard to define | | |
+| **\- [Implementation Risk](/tags/Implementation-Risk)** | Developer unfamiliar with the requirements / system | | |
+| **\- [Feature-Fit](/tags/Feature-Fit-Risk)**| Success criteria hard to define | | |
| | Difficult-to-access user base | | |
By starting discussions with an Interference Checklist, we can augment the "play planning poker" process by _prompting people on things to think about_, like "Do we know what done looks like here?", "Is this going to affect some of our existing functionality?", "How are we going to get it tested?".
-A Checklist is a good way of asking questions in order that we can manage risk early on. It's all about turning a [Hidden Risk](../thinking/Glossary.md#hidden-risk) into one we've thought about.
+A Checklist is a good way of asking questions in order that we can manage risk early on. It's all about turning a [Hidden Risk](/thinking/Glossary.md#hidden-risk) into one we've thought about.
If the team runs through this list together, and then decides the task is a "five-story-pointer", then surely that is a better, more rigorous approach than just plucking a number out of the air, as planning poker suggests.
@@ -142,13 +142,13 @@ Maybe the Interference Checklist for it looks like this:
| **Area** | **Concern** | **Notes** | **Point Value** |
| -------------------------------------------- | --------------------------------------------------------------------------------- | --------- | --------------- |
-| **\- [Conceptual Integrity Risk](../risks/Feature-Risk.md#conceptual-integrity-risk)** | Requires new interface to be added | Yes, new screen | 1 |
+| **\- [Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk)** | Requires new interface to be added | Yes, new screen | 1 |
| | Requires refactoring of existing interfaces | | |
-| **\- [Feature-Access Risk](../risks/Feature-Risk.md#feature-access-risk)**| Interface Experimentation required | | 1 |
+| **\- [Feature-Access Risk](/tags/Feature-Access-Risk)**| Interface Experimentation required | | 1 |
| | Varied user population | | 1 |
| | | | |
-| **\- [Implementation Risk](../risks/Feature-Risk.md#implementation-risk)** | Developer unfamiliar with the requirements / system | | |
-| **\- [Feature-Fit](../risks/Feature-Risk.md#feature-fit-risk)**| Success criteria hard to define | | |
+| **\- [Implementation Risk](/tags/Implementation-Risk)** | Developer unfamiliar with the requirements / system | | |
+| **\- [Feature-Fit](/tags/Feature-Fit-Risk)**| Success criteria hard to define | | |
| | Difficult-to-access user base | Need to find a representative group | 2 |
| | | Total | 5 |
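+
+The scoring above is nothing more than a sum of the per-concern point values. A sketch of the same arithmetic, using the values from the filled-in table:
+
+```python
+# Point values lifted from the filled-in checklist above.
+scores = {
+    "Requires new interface to be added": 1,
+    "Interface experimentation required": 1,
+    "Varied user population": 1,
+    "Difficult-to-access user base": 2,
+}
+
+story_points = sum(scores.values())
+print(story_points)  # 5, matching the checklist total
+```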
diff --git a/docs/estimating/Risk-First-Analysis.md b/docs/estimating/Risk-First-Analysis.md
index b53f251d5..2f11cf088 100644
--- a/docs/estimating/Risk-First-Analysis.md
+++ b/docs/estimating/Risk-First-Analysis.md
@@ -20,7 +20,7 @@ tweet: yes
The previous article, [Fixing Scrum](Fixing-Scrum.md), examined Scrum's idea of "Sprints" and concluded:
-- The main purpose of a Sprint is to ensure there is a **feedback loop**. Every two weeks (or however long the Sprint is) we have a Sprint Review, and review the code that has been completed during the Sprint. In Risk-First parlance, we call this [Meeting Reality](../thinking/Glossary.md#meet-reality). It is the process of _testing your ideas against reality_ to make sure they stand up.
+- The main purpose of a Sprint is to ensure there is a **feedback loop**. Every two weeks (or however long the Sprint is) we have a Sprint Review, and review the code that has been completed during the Sprint. In Risk-First parlance, we call this [Meeting Reality](/thinking/Glossary.md#meet-reality). It is the process of _testing your ideas against reality_ to make sure they stand up.
- This Sprint Review is performed by the whole team. All the code must be completed by the end of the sprint in order that it can be reviewed. This introduces an artificial deadline to be met.
@@ -28,19 +28,19 @@ The previous article, [Fixing Scrum](Fixing-Scrum.md), examined Scrum's idea of
![Scrum: Consequences Of Time-Boxing](/img/generated/estimating/planner/scrum-consequences.png)
-The diagram above shows this behaviour in the form of a [Risk-First Diagram](../thinking/Risk-First-Diagrams.md). Put briefly: _risks_ ([Schedule Risk](../risks/Scarcity-Risk.md#schedule-risk), [Feature Risk](/tags/Feature-Risk)) are addressed by actions such as "Development", "Review" or "Planning Poker".
+The diagram above shows this behaviour in the form of a [Risk-First Diagram](/thinking/Risk-First-Diagrams.md). Put briefly: _risks_ ([Schedule Risk](/tags/Schedule-Risk), [Feature Risk](/tags/Feature-Risk)) are addressed by actions such as "Development", "Review" or "Planning Poker".
-If you're new to [Risk-First](https://www.riskfirst.org) then it's probably worth explaining at this point that one of the purposes of this project is to enumerate the different types of risk you could face running a software project. You can begin to learn about them all [here](../risks/Start.md). Suffice to say, we have icons to represent each of these kinds of risks, and the rest of this article will introduce some of them to you in passing.
+If you're new to [Risk-First](https://www.riskfirst.org) then it's probably worth explaining at this point that one of the purposes of this project is to enumerate the different types of risk you could face running a software project. You can begin to learn about them all [here](/risks/Start.md). Suffice to say, we have icons to represent each of these kinds of risks, and the rest of this article will introduce some of them to you in passing.
##### On a Risk-First diagram, when you address a risk by taking an action, you draw a line through the risk.
## Estimating Is A Poor Tool
-Seen like this, **Planning Poker** is a tool to avoid the [Coordination Risk](../risks/Coordination-Risk.md) problem of everyone needing to complete their work for the end of the Sprint. But estimating is _really hard_: In this track so far we've looked at three different ways in which software estimation deviates from the straightforward extrapolation (a.k.a, [Fill-The-Bucket](Fill-The-Bucket.md)) we learnt about in maths classes at school:
+Seen like this, **Planning Poker** is a tool to avoid the [Coordination Risk](/tags/Coordination-Risk) problem of everyone needing to complete their work for the end of the Sprint. But estimating is _really hard_: In this track so far we've looked at three different ways in which software estimation deviates from the straightforward extrapolation (a.k.a, [Fill-The-Bucket](Fill-The-Bucket.md)) we learnt about in maths classes at school:
- [Kitchen Cabinet](Kitchen-Cabinet.md): In this domain, there is _hidden work_. We don't know how much there might be. If we can break down tasks into smaller units, then by the _law of averages_ and the _central limit theorem_, we can apply some statistics to figure out when we might finish (see the simulation sketch after this list).
- [Journeys](Journeys.md): In this domain, work is heterogeneous and interconnected. Different parts depend on each other, and a failure in one part might mean going right back to square one. The way to estimate in this domain is to _know the landscape_ and to build in _buffers_.
-- [Fractals](Fractals.md): In this domain, [Parkinson's Law](../risks/Process-Risk.md#bureaucracy) is king. There is always more work to be done. The best thing we can do is try and apply ourselves to the _highest value_ work at any given point, and frequently refer back to reality to find out if we're building the right thing.
+- [Fractals](Fractals.md): In this domain, [Parkinson's Law](/risks/Process-Risk.md#bureaucracy) is king. There is always more work to be done. The best thing we can do is try and apply ourselves to the _highest value_ work at any given point, and frequently refer back to reality to find out if we're building the right thing.
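+
+The Kitchen Cabinet point is easy to demonstrate with a quick simulation. The sketch below assumes each subtask's duration is exponentially distributed with an invented mean of three days: any single subtask is wildly unpredictable, but the total across twenty of them clusters around its mean, which is what lets us quote a confidence level.
+
+```python
+import random
+
+random.seed(1)
+
+# Sketch: a project of 20 subtasks, each with duration drawn from an
+# exponential distribution with mean 3 days (all figures invented).
+def project_duration(n_tasks=20, mean_days=3.0):
+    return sum(random.expovariate(1 / mean_days) for _ in range(n_tasks))
+
+runs = sorted(project_duration() for _ in range(10_000))
+print("median finish:", round(runs[len(runs) // 2], 1), "days")
+print("80% confidence:", round(runs[int(len(runs) * 0.8)], 1), "days")
+```
+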
![Three Dimensions From Fill-The-Bucket](/img/estimates/dimensions.png)
@@ -67,7 +67,7 @@ How can we convert a planning session away from being estimate-focused and back
- Consideration for what is going on longer-term in the project.
- Consideration of risks besides how long something takes. Sure, that's important, because it affects _value_, but it's not the only thing to worry about.
- _Deciding what is important_ above _what can fit into a sprint_.
-- Making [Bets](../bets/Purpose-Development-Team.md): what actions give the biggest [Payoff](../thinking/Glossary.md#payoff) for the smallest [Stake](../thinking/Glossary.md#stake)?
+- Making [Bets](/bets/Purpose-Development-Team.md): what actions give the biggest [Payoff](/thinking/Glossary.md#payoff) for the smallest [Stake](/thinking/Glossary.md#stake)?
## A Scenario
@@ -94,7 +94,7 @@ On a Risk-First diagram, tasks - or actions as we call them - are shown in "sign
By fixing the rendering bug, we are trying to deal with the problem that the software _demos badly_ and the resulting risk that potential customers don't trust the quality of our product. Risk-First diagrams show chronology from left-to-right. That is, on the left of the action is the world as it is now, whereas on the right is the world as it will be _after_ taking some action. To show that our action will eliminate some existing risk, we can strike it out by drawing a line through it.
-So, this diagram encapsulates the reason why we might fix the rendering bug: it's about addressing potential [Trust Risk](../risks/Communication-Risk.md#trust--belief-risk) in our product.
+So, this diagram encapsulates the reason why we might fix the rendering bug: it's about addressing potential [Trust Risk](/tags/Trust-And-Belief-Risk) in our product.
## Question 2: What Do We Gain?
@@ -104,7 +104,7 @@ Let's move on to task 2, the **Search Function**, as shown in the above diagram.
As with the **Rendering Bug**, above, we lose something: [Feature Risk](/tags/Feature-Risk), which is the risk (to us) that the features our product is supplying don't meet the client's (or the market's) requirements. Writing code is all about identifying and removing [Feature Risk](/tags/Feature-Risk), and building products that fit the needs of their users.
-So as in the Rendering Bug example, we can show [Feature Risk](/tags/Feature-Risk) being eliminated by showing it on the left with a strike-out line. However, it's been established during analysis that the way to implement this feature is to introduce [ElasticSearch](https://www.elastic.co), a third-party piece of software. This in itself is an [Attendant Risk](../thinking/Glossary.md#attendant-risk) of taking that action:
+So as in the Rendering Bug example, we can show [Feature Risk](/tags/Feature-Risk) being eliminated by showing it on the left with a strike-out line. However, it's been established during analysis that the way to implement this feature is to introduce [ElasticSearch](https://www.elastic.co), a third-party piece of software. This in itself is an [Attendant Risk](/thinking/Glossary.md#attendant-risk) of taking that action:
- Are we going to find that easy to deploy and maintain?
- What impact will this have on hosting charges?
@@ -113,9 +113,9 @@ So as in the Rendering Bug example, we can show [Feature Risk](/tags/Feature-Ris
##### If an action leads to new risks, show them on the right side of the action.
-So, on the right side of the action, we are showing the [Attendant Risks](../thinking/Glossary.md#attendant-risk) we _gain_ from taking the action.
+So, on the right side of the action, we are showing the [Attendant Risks](/thinking/Glossary.md#attendant-risk) we _gain_ from taking the action.
-## Question 3: What Is The [Payoff](../thinking/Glossary.md#payoff)?
+## Question 3: What Is The [Payoff](/thinking/Glossary.md#payoff)?
![Calculating Payoff](/img/generated/estimating/planner/impact.png)
@@ -123,13 +123,13 @@ If we know what we lose and what we gain from each action we take, then it's sim
### Upside Risk
-It's worth noting - not all risks are bad! [Upside Risk](../thinking/Glossary.md#upside-risk) captures this concept well. If I buy a lottery ticket, there's a big risk that I'll have wasted some money buying the ticket. But there's also the [Upside Risk](../thinking/Glossary.md#upside-risk) that I might win! Both upside and downside risks should be captured in your analysis of [Payoff](../thinking/Glossary.md#payoff).
+It's worth noting - not all risks are bad! [Upside Risk](/thinking/Glossary.md#upside-risk) captures this concept well. If I buy a lottery ticket, there's a big risk that I'll have wasted some money buying the ticket. But there's also the [Upside Risk](/thinking/Glossary.md#upside-risk) that I might win! Both upside and downside risks should be captured in your analysis of [Payoff](/thinking/Glossary.md#payoff).
-While some projects are expressed in terms of addressing risks (e.g. installing a security system, replacing the tyres on your car) a lot are expressed in terms of _opportunities_ (e.g. create a new product market, win a competition). It's important to consider these longer-term objectives in the [Payoff](../thinking/Glossary.md#payoff).
+While some projects are expressed in terms of addressing risks (e.g. installing a security system, replacing the tyres on your car) a lot are expressed in terms of _opportunities_ (e.g. create a new product market, win a competition). It's important to consider these longer-term objectives in the [Payoff](/thinking/Glossary.md#payoff).
![Goals, Anti-Goals, Risks and Upside Risks](/img/generated/estimating/planner/focus.png)
-The diagram above lays these out: We'll work hard to _improve the probability_ of [Goals](../thinking/Glossary.md#goal) and [Upside Risks](../thinking/Glossary.md#upside-risk) occurring, whilst at the same time taking action to prevent [Anti-Goals](https://riskfirst.org/post/news/2020/01/17/Anti-Goals) and [Downside Risks](../thinking/Glossary.md#risk).
+The diagram above lays these out: We'll work hard to _improve the probability_ of [Goals](/thinking/Glossary.md#goal) and [Upside Risks](/thinking/Glossary.md#upside-risk) occurring, whilst at the same time taking action to prevent [Anti-Goals](https://riskfirst.org/post/news/2020/01/17/Anti-Goals) and [Downside Risks](/thinking/Glossary.md#risk).
(There's a gentle introduction to the idea of _Anti-Goals_ [here](https://riskfirst.org/post/news/2020/01/17/Anti-Goals) which might be worth the diversion).
@@ -147,29 +147,29 @@ Let's look at the last example: the action to fix the Continuous Integration Pi
![Fixing The CI Pipeline, v1](/img/generated/estimating/planner/ci-impact.png)
-The above diagram tries to show how this is: on the left side, we have the [Coordination Risk](../risks/Coordination-Risk.md) experienced by the Development Team. (Note the use of round-cornered boxes to show _who_ the risks apply to). On the right side, we have the [Deadline Risk](/tags/Deadline-Risk) experienced by the Sales Team.
+The above diagram shows this: on the left side, we have the [Coordination Risk](/tags/Coordination-Risk) experienced by the Development Team. (Note the use of round-cornered boxes to show _who_ the risks apply to). On the right side, we have the [Deadline Risk](/tags/Deadline-Risk) experienced by the Sales Team.
On the face of it, it's clear why the Sales Team might feel annoyed - there is a transfer of risk _away_ from the Development Team _to_ them. That's not fair! But the Development Team Lead might counter by saying: "Look, this issue is slowing down development, which might mean this startup runs out of funding before the product is ready for launch. Plus it's causing a loss of morale in our team and we're having trouble retaining good staff as it is".
![Fixing The Build, v2](/img/generated/estimating/planner/ci-impact-2.png)
-The above diagram models that. Fixing the CI Pipeline is now implicated in reducing [Staff Risk](../risks/Scarcity-Risk.md#staff-risk), [Coordination Risk](../risks/Coordination-Risk.md) and [Funding Risk](../risks/Scarcity-Risk.md#funding-risk) for the whole business and therefore seems like it might have a better [Payoff](../thinking/Glossary.md#payoff).
+The above diagram models that. Fixing the CI Pipeline is now implicated in reducing [Staff Risk](/tags/Staff-Risk), [Coordination Risk](/tags/Coordination-Risk) and [Funding Risk](/tags/Funding-Risk) for the whole business and therefore seems like it might have a better [Payoff](/thinking/Glossary.md#payoff).
## Judgement
-But is that a fair assessment? How would you determine the [Payoff](../thinking/Glossary.md#payoff) in this situation? It's clear that even though we might be able to _describe_ the risks, it might not be all that easy to _quantify_ them.
+But is that a fair assessment? How would you determine the [Payoff](/thinking/Glossary.md#payoff) in this situation? It's clear that even though we might be able to _describe_ the risks, it might not be all that easy to _quantify_ them.
![Calculating Payoff](/img/generated/estimating/planner/impact.png)
Luckily, we don't really have to. If I am trying to evaluate a single action on my own, all I really need to do is answer one question: do I lose more risk than I gain?
-All I need to do is "weigh up" the change in risks as best as I can. A lot of the time, the [Payoff](../thinking/Glossary.md#payoff) will be obviously worth it, or obviously not.
+All I need to do is "weigh up" the change in risks as best as I can. A lot of the time, the [Payoff](/thinking/Glossary.md#payoff) will be obviously worth it, or obviously not.
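+
+If you can attach even rough scores to the risks an action removes and the risks it adds, "weighing up" is just a subtraction. A toy sketch using the CI pipeline example above; the 1-10 scores are invented:
+
+```python
+# Toy payoff calculation: risks scored on a rough 1-10 scale (invented).
+action = {
+    "name": "Fix the CI pipeline",
+    "removes": {"Coordination Risk": 6, "Staff Risk": 4, "Funding Risk": 3},
+    "adds": {"Deadline Risk (Sales Team)": 5},
+}
+
+payoff = sum(action["removes"].values()) - sum(action["adds"].values())
+print(f"{action['name']}: payoff = {payoff}")  # positive, so worth taking
+```
+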
## Ensemble
-So far, we've been looking at each task individually, working out which risks we're addressing, and which ones we're exposed to as a result. If you have plenty of spare talent and only a few tasks, then maybe that's enough and you can get to work on all the tasks that have a positive [Payoff](../thinking/Glossary.md#payoff). But if you're constrained, then you should be hunting for the [actions](../thinking/Glossary.md#taking-action) with the biggest [Payoff](../thinking/Glossary.md#payoff) and doing those first.
+So far, we've been looking at each task individually, working out which risks we're addressing, and which ones we're exposed to as a result. If you have plenty of spare talent and only a few tasks, then maybe that's enough and you can get to work on all the tasks that have a positive [Payoff](/thinking/Glossary.md#payoff). But if you're constrained, then you should be hunting for the [actions](/thinking/Glossary.md#taking-action) with the biggest [Payoff](/thinking/Glossary.md#payoff) and doing those first.
-Things change too when you have a whole team engaged in the planning process. Although people will generally agree on what the risks _are_, they often will disagree on the [Probability they will occur, or the impact if they do](../thinking/Track-Risk.md#risk-registers). In cases like these, you might want to allow each stakeholder to "vote up" the risks they consider significant, or vote up the actions they consider to have high [Payoff](../thinking/Glossary.md#payoff). This will be covered in further detail in the [next section](Stop-Estimating-Start-Navigating.md).
+Things change too when you have a whole team engaged in the planning process. Although people will generally agree on what the risks _are_, they often will disagree on the [Probability they will occur, or the impact if they do](/thinking/Track-Risk.md#risk-registers). In cases like these, you might want to allow each stakeholder to "vote up" the risks they consider significant, or vote up the actions they consider to have high [Payoff](/thinking/Glossary.md#payoff). This will be covered in further detail in the [next section](Stop-Estimating-Start-Navigating.md).
But for now, let's talk about the ways in which this is better or worse than Planning Poker.
diff --git a/docs/estimating/Start.md b/docs/estimating/Start.md
index 291996c5b..f0ffd7721 100644
--- a/docs/estimating/Start.md
+++ b/docs/estimating/Start.md
@@ -10,7 +10,7 @@ cat: Estimating
tags:
- Front
tweet: yes
-sidebar_position: 5
+sidebar_position: 8
---
# On Estimating
diff --git a/docs/estimating/Stop-Estimating-Start-Navigating.md b/docs/estimating/Stop-Estimating-Start-Navigating.md
index 8b9ca82aa..957f10bd8 100644
--- a/docs/estimating/Stop-Estimating-Start-Navigating.md
+++ b/docs/estimating/Stop-Estimating-Start-Navigating.md
@@ -19,7 +19,7 @@ This is the _ninth_ article in the [Risk-First](https://riskfirst.org) track on
- In article seven, we explored how [Scrum](Fixing-Scrum.md), the popular Agile methodology, fails to understand this crucial problem with estimates (among other failings).
-- Then, in [Risk-First Analysis](Risk-First-Analysis.md) we look at how we can work out what to build by examining what [risks](../thinking/Glossary.md#risk) we'd like to address and which [goals](../thinking/Glossary.md#risk) or [Upside Risks](../thinking/Glossary.md#upside-risk) we'd like to see happen.
+- Then, in [Risk-First Analysis](Risk-First-Analysis.md) we look at how we can work out what to build by examining what [risks](/thinking/Glossary.md#risk) we'd like to address and which [goals](/thinking/Glossary.md#goal) or [Upside Risks](/thinking/Glossary.md#upside-risk) we'd like to see happen.
So, now we're up to date. It's article nine, and I was going to build on [Risk-First Analysis](Risk-First-Analysis.md) to show how to plan work for a team of people over a week, a month, a year.
diff --git a/docs/estimating/_category_.yaml b/docs/estimating/_category_.yaml
index bc1184819..214305c6a 100644
--- a/docs/estimating/_category_.yaml
+++ b/docs/estimating/_category_.yaml
@@ -1,4 +1,4 @@
-position: 6
+position: 8
label: 'Estimating'
link:
type: doc
diff --git a/docs/methods/Start.md b/docs/methods/Start.md
index 06a0d1c4b..b7be8fc9e 100644
--- a/docs/methods/Start.md
+++ b/docs/methods/Start.md
@@ -1,6 +1,6 @@
---
-title: Methodology
-description: Some rough groupings of practices by methodology.
+title: Methods
+description: Some rough groupings of practices by development method.
featured:
class: c
@@ -9,7 +9,7 @@ layout: categories
cat: Complexity
tags:
- Front
-sidebar_position: 6
+sidebar_position: 5
tweet: yes
---
diff --git a/docs/misc/Anti-Goals.md b/docs/misc/Anti-Goals.md
index 1952c4865..bfa5aecba 100644
--- a/docs/misc/Anti-Goals.md
+++ b/docs/misc/Anti-Goals.md
@@ -45,9 +45,9 @@ Terry _busted through_ the Anti-Goal, and eventually released VVVVVV to critical
## Visualising Anti-Goals
-Goals and Anti-Goals are both kinds of [Risks](../thinking/Glossary.md#risk). While Goals are "Upside" risks or opportunities (the outcome is uncertain, but likely to be in your favour), Anti-Goals are "Downside" risks (again, uncertain outcome, likely to go against you): you'll want to try to navigate between these to arrive at the Goal, rather than the Anti-Goal.
+Goals and Anti-Goals are both kinds of [Risks](/thinking/Glossary.md#risk). While Goals are "Upside" risks or opportunities (the outcome is uncertain, but likely to be in your favour), Anti-Goals are "Downside" risks (again, uncertain outcome, likely to go against you): you'll want to try to navigate between these to arrive at the Goal, rather than the Anti-Goal.
-Here at [Risk-First](https://riskfirst.org), there's lots of talk about navigating the [Risk Landscape](../risks/Risk-Landscape.md), which you can imagine being like the terrain of a golf course (as in the diagram above).
+Here at [Risk-First](https://riskfirst.org), there's lots of talk about navigating the [Risk Landscape](/risks/Risk-Landscape.md), which you can imagine being like the terrain of a golf course (as in the diagram above).
Sporting analogies are an over-used pedagogic tool, for writers of limited imagination. But I'm not going to let that stop me from beating this to death:
@@ -73,7 +73,7 @@ But Anti-Goals are a demonstration of the fact that sometimes, you can't just ca
- If I'm building a secure website, the anti-goal might be _accidentally publishing sensitive details_.
- If I'm adding features to my product, an anti-goal might be _making the product harder to use_.
-We almost have a blind-spot when it comes to the anti-goals, or at least, we create further task-ticket for them to make them someone else's problem, or dump them on a [Risk Register](../thinking/Just-Risk.md) to be forgotten about.
+We almost have a blind-spot when it comes to the anti-goals, or at least, we create further task-tickets for them to make them someone else's problem, or dump them on a [Risk Register](/thinking/Just-Risk.md) to be forgotten about.
We need to acknowledge that pursuing certain goals via certain courses of action exposes us to Anti-Goals along the way. Maybe we can frame this in Behaviour-Driven Design?
diff --git a/docs/overview/Quick-Summary.md b/docs/overview/Quick-Summary.md
index aa6edfa84..016c080fb 100644
--- a/docs/overview/Quick-Summary.md
+++ b/docs/overview/Quick-Summary.md
@@ -15,7 +15,7 @@ tweet: yes
## 1. There are Lots of Ways to Run Software Projects
-There are lots of ways to look at a project in-flight. For example, metrics such as “number of open tickets”, “story points”, “code coverage" or "release cadence" give us a numerical feel for how things are going and what needs to happen next. We also judge the health of projects by the practices used on them, such as [Continuous Integration](../practices/Testing.md#continuous-integration), [Unit Testing](../practices/Testing.md) or [Pair Programming](../practices/Coding.md).
+There are lots of ways to look at a project in-flight. For example, metrics such as “number of open tickets”, “story points”, “code coverage" or "release cadence" give us a numerical feel for how things are going and what needs to happen next. We also judge the health of projects by the practices used on them, such as [Continuous Integration](/practices/Testing.md#continuous-integration), [Unit Testing](/tags/Automated-Testing) or [Pair Programming](/tags/Pair-Programming).
Software methodologies, then, are collections of tools and practices: “Agile”, “Waterfall”, “Lean” or “Phased Delivery” all prescribe different approaches to running a project and are opinionated about the way they think projects should be done and the tools that should be used.
@@ -25,11 +25,11 @@ A key question then is: **how do we select the right tools for the job?**
## 2. We Can Look at Projects in Terms of Risks
-One way to examine the project in-flight is by looking at the [risks](../thinking/Glossary.md#risk) it faces.
+One way to examine the project in-flight is by looking at the [risks](/thinking/Glossary.md#risk) it faces.
Commonly, tools such as [RAID logs](https://www.projectmanager.com/blog/raid-log-use-one) and [RAG status](https://pmtips.net/blog-new/what-does-rag-status-mean) reporting are used. These techniques should be familiar to project managers and developers everywhere.
-However, the Risk-First view is that we can go much further: that each item of work being done on the project is to manage a particular risk. [Risk](../thinking/Glossary.md#risk) isn't something that just appears in a report, it actually drives *everything we do*.
+However, the Risk-First view is that we can go much further: that each item of work being done on the project exists to manage a particular risk. [Risk](/thinking/Glossary.md#risk) isn't something that just appears in a report, it actually drives *everything we do*.
For example:
@@ -37,7 +37,7 @@ For example:
- A task about improving the health indicators could be seen as mitigating _the risk of the application failing and no-one reacting to it_.
- Even a task as basic as implementing a new function in the application is mitigating _the risk that users are dissatisfied and go elsewhere_.
-One assertion of Risk-First is that **every action you take on a project is to manage a [risk](../thinking/Glossary.md#risk).**
+One assertion of Risk-First is that **every action you take on a project is to manage a [risk](/thinking/Glossary.md#risk).**
## 3. We Can Break Down Risks on a Project Methodically
@@ -54,7 +54,7 @@ Software risks are difficult to quantify and mostly the effort involved in doing
With this in place, we can:
- Talk about the types of risks we face on our projects, using an appropriate language.
-- Anticipate [Hidden Risks](../thinking/Glossary.md#hidden-risk) that we hadn't considered before.
+- Anticipate [Hidden Risks](/thinking/Glossary.md#hidden-risk) that we hadn't considered before.
- Weigh the risks against each other and decide which order to tackle them.
## 4. We Can Analyse Tools and Techniques in Terms of how they Manage Risk
@@ -63,8 +63,8 @@ If we accept the assertion that _all_ the actions we take on a project are about
For example:
- - If we do a [Code Review](../practices/Review.md), we are partly trying to minimise the risks of bugs slipping through into production and also manage the [Key Person Risk](../risks/Scarcity-Risk.md#staff-risk) of knowledge not being widely-enough shared.
- - If we write [Unit Tests](../practices/Testing.md), we’re addressing the risk of bugs going to production. We’re also mitigating against the risk of _regression_ and future changes breaking our existing functionality.
+ - If we do a [Code Review](/tags/Review), we are partly trying to minimise the risks of bugs slipping through into production and also manage the [Key Person Risk](/tags/Staff-Risk) of knowledge not being widely-enough shared.
+ - If we write [Unit Tests](/tags/Automated-Testing), we’re addressing the risk of bugs going to production. We’re also mitigating the risk of _regression_ and future changes breaking our existing functionality.
- If we enter into a contract with a supplier then we are mitigating the risk of the supplier vanishing and leaving us exposed. With the contract in place we have legal recourse against this risk.
From the above examples, it's clear that **different tools are appropriate for managing different types of risks.**
@@ -91,9 +91,9 @@ We have described a model of risk within software projects, looking something li
How do we take this further?
-One idea explored is the _[Risk Landscape](../risks/Risk-Landscape.md)_: although the software team can't remove risk from their project, they can take actions that move them to a place in the [Risk Landscape](../risks/Risk-Landscape.md) where the risks on the project are more favourable than where they started.
+One idea explored is the _[Risk Landscape](/risks/Risk-Landscape.md)_: although the software team can't remove risk from their project, they can take actions that move them to a place in the [Risk Landscape](/risks/Risk-Landscape.md) where the risks on the project are more favourable than where they started.
-From there, we examine basic risk archetypes you will encounter on the software project, to build up a [vocabulary of Software Risk](../risks/Staging-And-Classifying.md) and look at which specific tools you can use to mitigate each kind of risk.
+From there, we examine basic risk archetypes you will encounter on the software project, to build up a [vocabulary of Software Risk](/risks/Staging-And-Classifying.md) and look at which specific tools you can use to mitigate each kind of risk.
Then, we look at software practices and how they manage various risks. Beyond this we examine the question: _how can a Risk-First approach inform the use of this practice?_
diff --git a/docs/overview/Start.md b/docs/overview/Start.md
index 0248fc8c0..aea5aa0e8 100644
--- a/docs/overview/Start.md
+++ b/docs/overview/Start.md
@@ -21,7 +21,7 @@ The software development world is crowded with different practices, metrics, met
The Risk-First perspective is that all of these practices and methodologies have at their heart the job of managing different _software development risks_. Risk isn't something that just appears in a report, it actually drives everything we do.
-## Outcomes
+## Introductory Articles
The articles linked below aim to give you a taster of what Risk-First is about, and how to navigate the material on this site.
diff --git a/docs/overview/Tracks.md b/docs/overview/Tracks.md
index bc9fa9038..ac114341d 100644
--- a/docs/overview/Tracks.md
+++ b/docs/overview/Tracks.md
@@ -18,16 +18,8 @@ tweet: yes
There is quite a lot of material on this site so to aid digestion Risk-First is split into several main _tracks_. These are shown on the menu at the top of this page and also in the navigator. Here's what we have so far:
-1. **The Overview (This Bit)**: Summary materials and how to navigate the site.
-
-2. **[Thinking Risk-First](../thinking/Start.md)**: Thinking about Software Development from a new perspective - it's not about code and issues and bugs and releases. Those things represent ways in which we deal with risks.
-
-3. **[Risks](../risks/Start.md)**: Here, we try and break down the different risks that you face on a software project and talk at a high level about the kind of actions you take to deal with them.
-
-4. **[On Bets](../bets/Start.md)**: If software development is all about risk then doesn't that make us all gamblers? Here, we look at the types of bets we make every day when building software and look at how we might try to maximise our profits.
-
-5. **[On Estimating](../estimating/Start.md)**: _Estimating_ is the _bête noire_ of software development: simple to conceptualise but a mine-field for the unwary. Here, we take it apart from a Risk-First perspective to try and understand _why_ it is so difficult and what we should do about it.
-
-6. **[On Complexity](../complexity/Start.md)**: _(Under Construction)_ The complexity of the work we do is a big source of risk. Can we understand it better?
-
-If you're just starting with Risk-First, then let's head to [Thinking Risk-First](../thinking/Start.md) next...
\ No newline at end of file
+
+
+## Let's Go!
+
+If you're just starting with Risk-First, then let's head to [Thinking Risk-First](/thinking/Start.md) next...
\ No newline at end of file
diff --git a/docs/practices/Deployment-And-Operations/Automation.md b/docs/practices/Deployment-And-Operations/Automation.md
index a7df5ea82..6d26f64c1 100644
--- a/docs/practices/Deployment-And-Operations/Automation.md
+++ b/docs/practices/Deployment-And-Operations/Automation.md
@@ -49,6 +49,10 @@ practice:
One of the key ways to measure whether your team is doing _useful work_ is to look at whether, in fact, it can be automated. And this is the spirit of [DevOps](DevOps) - the idea that people in general are poor at repeatable tasks, and anything people do repeatedly _should_ be automated.
+See:
+
+ - [Automation (Meeting Reality)](/thinking/Meeting-Reality.md#example-automation)
+ - [The Purpose of Process](/risks/Process-Risk.md#the-purpose-of-process)
## See Also
diff --git a/docs/practices/Deployment-And-Operations/Configuration-Management.md b/docs/practices/Deployment-And-Operations/Configuration-Management.md
index b3a5a1869..64860d476 100644
--- a/docs/practices/Deployment-And-Operations/Configuration-Management.md
+++ b/docs/practices/Deployment-And-Operations/Configuration-Management.md
@@ -12,6 +12,7 @@ practice:
- "CM"
- "SCM"
- "Software Configuration Management"
+ - "Feature Toggle"
mitigates:
- tag: Implementation Risk
reason: "Establishes and maintains consistency in the software product's performance and attributes."
@@ -40,6 +41,10 @@ practice:
Configuration Management (CM) involves systematically handling changes to ensure the system maintains its integrity over time. It includes practices and tools for managing changes, tracking their status, and maintaining an inventory of system and support documents. CM is critical in software engineering to handle changes efficiently, reduce risks, and ensure the system performs as intended throughout its lifecycle.
+See:
+
+ - [Consider Payoff](/thinking/Consider-Payoff.md)
+
## See Also
\ No newline at end of file
diff --git a/docs/practices/Deployment-And-Operations/Demand-Management.md b/docs/practices/Deployment-And-Operations/Demand-Management.md
new file mode 100644
index 000000000..ce30e5148
--- /dev/null
+++ b/docs/practices/Deployment-And-Operations/Demand-Management.md
@@ -0,0 +1,52 @@
+---
+title: Demand Management
+description: The practice of forecasting, planning, and managing the demand for products or services to ensure that they meet the business objectives and customer needs.
+tags:
+ - Planning-Management
+ - Demand-Management
+featured:
+ class: c
+ element: 'Demand Management'
+practice:
+ aka:
+ - "Demand Planning"
+ - "Demand Forecasting"
+ - "Resource Planning"
+ - "Capacity Planning"
+ mitigates:
+ - tag: Resource Risk
+ reason: "Helps in efficiently allocating resources to meet the demand without overburdening the team."
+ - tag: Schedule Risk
+ reason: "Ensures that the demand is managed to meet delivery schedules."
+ - tag: Market Risk
+ reason: "Aligns production with market demand, reducing the risk of under or overproduction."
+ attendant:
+ - tag: Complexity Risk
+ reason: "Forecasting and planning demand can add complexity to project management."
+ - tag: Inflexibility Risk
+ reason: "Strict demand management can limit the ability to respond to unexpected changes."
+ - tag: Accuracy Risk
+ reason: "Inaccurate demand forecasts can lead to resource misallocation."
+ related:
+ - ../Planning-And-Management/Prioritising
+ - ../Planning-And-Management/Requirements-Capture
+---
+
+
+
+## Description
+
+> "Demand management is a planning methodology used to forecast, plan for and manage the demand for products and services. It is a key process in supply chain management." - [Demand Management, _Wikipedia_](https://en.wikipedia.org/wiki/Demand_management)
+
+Demand Management involves forecasting, planning, and managing the demand for products or services to ensure they align with business objectives and customer needs. This practice helps in balancing supply and demand, optimizing resource utilization, and enhancing customer satisfaction by ensuring timely delivery of products or services.
+
+TODO: buffers, queues, pools, kanban
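+
+One concrete tool here is the bounded buffer: a queue absorbs bursts of demand, while its size limit pushes back on producers rather than letting work pile up invisibly (the same idea as a kanban WIP limit). A minimal sketch, with an invented limit:
+
+```python
+# A sketch of demand buffering with a bounded queue: accept work up to a
+# work-in-progress limit, reject (or defer) the rest.
+import queue
+
+requests = queue.Queue(maxsize=5)    # the WIP limit is illustrative
+
+def submit(item) -> bool:
+    try:
+        requests.put_nowait(item)
+        return True                  # absorbed by the buffer
+    except queue.Full:
+        return False                 # demand exceeds capacity: push back
+
+for i in range(8):
+    print(f"request {i}:", "queued" if submit(i) else "rejected")
+```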
+
+See:
+
+- [Scarcity Risk](/risks/Scarcity-Risk.md#mitigations)
+
+
+## See Also
+
+
diff --git a/docs/practices/Deployment-And-Operations/Monitoring.md b/docs/practices/Deployment-And-Operations/Monitoring.md
index b4e2b1f91..a74925d51 100644
--- a/docs/practices/Deployment-And-Operations/Monitoring.md
+++ b/docs/practices/Deployment-And-Operations/Monitoring.md
@@ -40,6 +40,11 @@ practice:
Monitoring encompasses a wide range of practices designed to ensure that systems operate efficiently and without interruption. This includes tracking the performance, availability, and security of networks, systems, and applications. Effective monitoring helps in early detection of issues, allowing for prompt resolution and minimizing the impact on operations.
+See:
+ - [Operations Management](/risks/Operational-Risk.md#operations-management)
+ - [Monitoring](/risks/Agency-Risk.md#monitoring)
+ - [Control](/risks/Operational-Risk.md#control)
+
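+As a sketch of the basic shape of monitoring (Python; the metric source and threshold are invented stand-ins for a real monitoring tool):
+
+```python
+# A minimal monitoring loop: poll a metric, alert when it crosses a
+# threshold. get_queue_depth stands in for a real metric source such as
+# a monitoring agent or metrics API.
+import random
+import time
+
+THRESHOLD = 100
+
+def get_queue_depth() -> int:
+    return random.randint(0, 150)    # stand-in for a real measurement
+
+def alert(message: str) -> None:
+    print("ALERT:", message)         # a real system would page someone
+
+for _ in range(10):                  # a real monitor loops forever
+    depth = get_queue_depth()
+    if depth > THRESHOLD:
+        alert(f"queue depth {depth} exceeds {THRESHOLD}")
+    time.sleep(0.1)
+```
+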
## See Also
diff --git a/docs/practices/Deployment-And-Operations/Release.md b/docs/practices/Deployment-And-Operations/Release.md
index 4412e8ed0..3d605a39d 100644
--- a/docs/practices/Deployment-And-Operations/Release.md
+++ b/docs/practices/Deployment-And-Operations/Release.md
@@ -38,6 +38,12 @@ practice:
Release / Delivery involves the structured and controlled process of moving software from development to production environments. It ensures that all aspects of the software are ready for deployment, including code stability, functionality, and performance. Effective release management is crucial for maintaining the quality and reliability of software, minimizing disruptions, and ensuring that new features and fixes reach users in a timely manner.
+
+See:
+- [Development Process](/thinking/Development-Process.md#a-toy-process)
+- [Consider Payoff](/thinking/Consider-Payoff.md#example-4-continue-testing-or-release)
+- [Production (Cadence)](/thinking/Cadence.md#production)
+
## See Also
diff --git a/docs/practices/Development-And-Coding/Coding.md b/docs/practices/Development-And-Coding/Coding.md
index 01ed723cb..2195e2dfc 100644
--- a/docs/practices/Development-And-Coding/Coding.md
+++ b/docs/practices/Development-And-Coding/Coding.md
@@ -38,6 +38,11 @@ practice:
Coding is a core activity in software development, involving the translation of requirements and designs into functional code. High-quality coding practices are essential for creating reliable, maintainable, and efficient software. This involves writing clear, well-structured, and documented code that adheres to established standards and best practices.
+See:
+
+ - [Time/Reality Tradeoff](/thinking/Cadence.md#time--reality-trade-off)
+
+
## See Also
diff --git a/docs/practices/Development-And-Coding/Pair-Programming.md b/docs/practices/Development-And-Coding/Pair-Programming.md
index b170b2088..3953af314 100644
--- a/docs/practices/Development-And-Coding/Pair-Programming.md
+++ b/docs/practices/Development-And-Coding/Pair-Programming.md
@@ -42,6 +42,11 @@ practice:
Pair Programming involves two developers working together on the same code. One developer writes the code while the other reviews each line in real-time, providing instant feedback and suggestions. This practice not only improves code quality but also facilitates knowledge sharing and collaboration between team members.
+See:
+
+ - [Crisis Mode](/thinking/Crisis-Mode.md)
+
+
## See Also
diff --git a/docs/practices/Development-And-Coding/Prototyping.md b/docs/practices/Development-And-Coding/Prototyping.md
index 9fa8d17ad..c3c20a5e8 100644
--- a/docs/practices/Development-And-Coding/Prototyping.md
+++ b/docs/practices/Development-And-Coding/Prototyping.md
@@ -38,6 +38,10 @@ practice:
Prototyping in software development involves creating early models or mockups of the software to test concepts and gather feedback. This practice helps in validating design choices, identifying potential issues, and ensuring that the final product meets the users' needs and expectations.
+See:
+ - [Spike Solution (Coding Bets)](/bets/Coding-Bets.md#spike-solutions-a-new-technology-bet)
+
+
## See Also
diff --git a/docs/practices/Development-And-Coding/Refactoring.md b/docs/practices/Development-And-Coding/Refactoring.md
index 94fa49369..815912b16 100644
--- a/docs/practices/Development-And-Coding/Refactoring.md
+++ b/docs/practices/Development-And-Coding/Refactoring.md
@@ -15,6 +15,7 @@ practice:
- "Factoring"
- "Separation of Concerns"
- "Modularisation"
+ - "Creating Abstractions"
mitigates:
- tag: Complexity Risk
reason: "Refactoring is aimed at making code more orthogonal, less duplicative and clearer to understand"
@@ -42,6 +43,18 @@ practice:
Refactoring involves revising and restructuring existing code to improve its readability, maintainability, and performance without changing its external behavior. This practice helps in reducing technical debt, enhancing code quality, and making the codebase easier to understand and modify.
+## Abstractions
+
+Refactoring is all about ensuring you have the right abstractions.
+
+> "An abstraction" is the outcome of this process—a concept that acts as a common noun for all subordinate concepts and connects any related concepts as a group, field, or category.
+
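+To make that concrete, here is a small sketch of refactoring towards an abstraction (Python; the domain and numbers are invented). Near-duplicate functions collapse once the hidden concept, a discount rule, is given a name:
+
+```python
+# Before: duplicated logic - the concept is present but has no name.
+def student_price(price):
+    return price * 0.9
+
+def senior_price(price):
+    return price * 0.8
+
+# After: one abstraction, "discount", expressed as data plus one function.
+DISCOUNTS = {"student": 0.9, "senior": 0.8, "staff": 0.7}
+
+def discounted_price(price: float, category: str) -> float:
+    return price * DISCOUNTS.get(category, 1.0)
+
+print(discounted_price(100, "senior"))   # 80.0
+```
+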
+See:
+
+ - [Refactoring](/risks/Complexity-Risk.md#refactoring)
+ - [The Power of Abstractions](/risks/Staging-And-Classifying.md#the-power-of-abstractions)
+ - [Hierarchies and Modularisation](/risks/Complexity-Risk.md#hierarchies-and-modularisation)
+
## External References
diff --git a/docs/practices/Development-And-Coding/Runtime-Adoption.md b/docs/practices/Development-And-Coding/Runtime-Adoption.md
index 6bc78a4e7..16f231289 100644
--- a/docs/practices/Development-And-Coding/Runtime-Adoption.md
+++ b/docs/practices/Development-And-Coding/Runtime-Adoption.md
@@ -40,6 +40,13 @@ practice:
Adoption of standards and libraries involves implementing and adhering to established standards and integrating widely-used libraries in software development. This practice helps in ensuring consistency, reliability, and maintainability of the software by leveraging proven solutions.
+See:
+
+ - [Languages and Dependencies](/risks/Complexity-Risk.md#languages-and-dependencies)
+ - [Software Libraries (Software Dependency Risk)](/risks/Software-Dependency-Risk.md#2-software-libraries)
+ - [Software-as-a-Service (Software Dependency Risk)](/risks/Software-Dependency-Risk.md#3--software-as-a-service)
+
+
## See Also
diff --git a/docs/practices/Development-And-Coding/Standardisation.md b/docs/practices/Development-And-Coding/Standardisation.md
new file mode 100644
index 000000000..0321957d4
--- /dev/null
+++ b/docs/practices/Development-And-Coding/Standardisation.md
@@ -0,0 +1,50 @@
+---
+title: Standardisation
+description: The practice of establishing and adhering to standards to ensure consistency, compatibility, and quality in software development.
+tags:
+ - Tools-Standards
+ - Standardisation
+featured:
+ class: c
+ element: 'Standardisation'
+practice:
+ aka:
+ - "Standardization"
+ - "Normalization"
+ - "Uniformity"
+ - "Consistency"
+ - "Re-Use"
+ mitigates:
+ - tag: Feature Fit Risk
+ reason: "Ensures that the features conform to predefined standards, reducing variability."
+ - tag: Operational Risk
+ reason: "Reduces operational errors by providing clear guidelines and protocols."
+ - tag: Communication Risk
+ reason: "Improves communication by using a common language and standardized terms."
+ attendant:
+ - tag: Inflexibility Risk
+ reason: "May limit creativity and flexibility by enforcing strict adherence to standards."
+ - tag: Implementation Risk
+ reason: "Can introduce complexity and delays during the implementation phase."
+ - tag: Compliance Risk
+ reason: "Ensuring continuous compliance with evolving standards can be challenging."
+ related:
+ - ../Development-And-Coding/Coding
+ - ../Deployment-And-Operations/Configuration-Management
+---
+
+
+
+## Description
+
+> "Standardization (or standardisation) is the process of developing and implementing technical standards. It can help to maximize compatibility, interoperability, safety, repeatability, or quality." - [Standardization, _Wikipedia_](https://en.wikipedia.org/wiki/Standardization)
+
+Standardisation involves creating, implementing, and enforcing standards and guidelines to ensure consistency, compatibility, and quality across software projects. This practice helps in maintaining uniformity, reducing complexity, and improving communication among team members and stakeholders.
+
+See:
+- [Unwritten Software (Software Dependency Risk)](/risks/Software-Dependency-Risk.md#unwritten-software)
+
+
+## See Also
+
+
\ No newline at end of file
diff --git a/docs/practices/Development-And-Coding/Tool-Adoption.md b/docs/practices/Development-And-Coding/Tool-Adoption.md
index 948782172..e5084f87b 100644
--- a/docs/practices/Development-And-Coding/Tool-Adoption.md
+++ b/docs/practices/Development-And-Coding/Tool-Adoption.md
@@ -51,7 +51,7 @@ In general, unless the problem is somehow _specific to your circumstances_ it ma
Tools in general are _good_ and _worth using_ if they offer you a better risk return than you would have had from not using them.
-But, this is a low bar - some tools offer _amazing_ returns on investment. The [Silver Bullets](../complexity/Silver-Bullets.md) article describes in general some of these:
+But, this is a low bar - some tools offer _amazing_ returns on investment. The [Silver Bullets](/complexity/Silver-Bullets.md) article describes in general some of these:
- Assemblers
- Compilers
- Garbage Collection
@@ -62,7 +62,7 @@ But, this is a low bar - some tools offer _amazing_ returns on investment. The
A _really good tool_ offers such advantages that not using it becomes _unthinkable_: Linux is heading towards this point. For Java developers, the JVM is there already.
-Picking new tools and libraries should be done **very carefully**: you may be stuck with your choices for some time. Here is a [short guide that might help](../risks/Dependency-Risk.md).
+Picking new tools and libraries should be done **very carefully**: you may be stuck with your choices for some time. Here is a [short guide that might help](/tags/Dependency-Risk).
## See Also
diff --git a/docs/practices/Development-And-Coding/Version-Control.md b/docs/practices/Development-And-Coding/Version-Control.md
index 981231bbd..0cd5b7342 100644
--- a/docs/practices/Development-And-Coding/Version-Control.md
+++ b/docs/practices/Development-And-Coding/Version-Control.md
@@ -8,7 +8,7 @@ featured:
class: c
element: 'Version Control'
practice:
- aka:
+ aka:
- "Source Control"
- "Revision Control"
- "SCM"
@@ -19,6 +19,9 @@ practice:
reason: "Facilitates collaboration by allowing multiple developers to work on the codebase simultaneously."
- tag: Regression Risk
reason: "Maintains a history of changes, allowing rollback to previous versions if needed."
+ attendant:
+ - tag: Invisibility Risk
+ reason: "Poor version management can be chaotic and leave lots of work in progress."
related:
- ../Planning-and-Management/Change-Management
- ../Development-and-Coding/Coding
diff --git a/docs/practices/External-Relations/Analysis.md b/docs/practices/External-Relations/Analysis.md
index 46dccb0d4..36979727a 100644
--- a/docs/practices/External-Relations/Analysis.md
+++ b/docs/practices/External-Relations/Analysis.md
@@ -42,6 +42,10 @@ practice:
Analysis in software development involves examining and breaking down the requirements, systems, and processes to understand the needs and ensure the correct implementation of the software. This practice is crucial for identifying potential issues, clarifying requirements, and ensuring that the development aligns with business goals and user needs.
+See:
+
+ - [Environmental Scanning](/risks/Operational-Risk.md#scanning-the-operational-context)
+
## See Also
diff --git a/docs/practices/External-Relations/Fundraising.md b/docs/practices/External-Relations/Fundraising.md
new file mode 100644
index 000000000..f476de107
--- /dev/null
+++ b/docs/practices/External-Relations/Fundraising.md
@@ -0,0 +1,51 @@
+---
+title: Fundraising
+description: The practice of securing funding from investors to support the growth and development of a startup.
+tags:
+ - Startup
+ - Investment
+ - Capital
+featured:
+ class: c
+ element: 'Fundraising'
+practice:
+ aka:
+ - "Raising Investment Capital"
+ - "Venture Capital"
+ - "Seed Funding"
+ - "Angel Investment"
+ mitigates:
+ - tag: Resource Risk
+ reason: "Provides necessary financial resources to support the startup’s operations and growth."
+ - tag: Market Risk
+ reason: "Allows the startup to invest in market research and customer acquisition."
+ - tag: Development Risk
+ reason: "Enables the startup to fund product development and innovation."
+ attendant:
+ - tag: Ownership Risk
+ reason: "Involves giving up a portion of ownership and control to investors."
+ - tag: Dependency Risk
+ reason: "Creates a dependency on investors and their continued support."
+ - tag: Pressure Risk
+ reason: "Introduces pressure to meet investor expectations and deliver returns."
+ related:
+ - ../Planning-And-Management/Stakeholder-Management
+ - ../Planning-And-Management/Requirements-Capture
+---
+
+
+
+## Description
+
+> "Raising investment capital involves securing funding from investors to support the growth and development of a startup. This process typically includes creating a compelling pitch, meeting with potential investors, and negotiating terms." - [Raising Investment Capital, _Wikipedia_](https://en.wikipedia.org/wiki/Venture_capital)
+
+Fundraising is a critical practice for startups looking to scale their operations, develop new products, and enter new markets. It involves engaging with venture capitalists, angel investors, and other funding sources to secure the necessary financial resources. Effective fundraising requires a clear business plan, a compelling value proposition, and strong stakeholder management skills.
+
+See:
+ - [Funding Risk](/tags/Funding-Risk)
+
+
+## See Also
+
+
+
diff --git a/docs/practices/External-Relations/Outsourcing.md b/docs/practices/External-Relations/Outsourcing.md
index cdab3cf51..4afd579d2 100644
--- a/docs/practices/External-Relations/Outsourcing.md
+++ b/docs/practices/External-Relations/Outsourcing.md
@@ -42,13 +42,13 @@ Outsourcing in software development involves hiring external vendors or service
## Discussion
-**Pairing** and **Mobbing** as mitigations to [Coordination Risk](../risks/Coordination-Risk.md) are easiest when developers are together in the same room. But it doesn't always work out like this. Teams spread in different locations and timezones naturally don't have the same [communication bandwidth](/tags/Communication-Risk) and you _will_ have more issues with [Coordination Risk](../risks/Coordination-Risk.md).
+**Pairing** and **Mobbing** as mitigations to [Coordination Risk](/tags/Coordination-Risk) are easiest when developers are together in the same room. But it doesn't always work out like this. Teams spread in different locations and timezones naturally don't have the same [communication bandwidth](/tags/Communication-Risk) and you _will_ have more issues with [Coordination Risk](/tags/Coordination-Risk).
In the extreme, I've seen situations where the team at one location has decided to "suck up" the extra development effort themselves rather than spend time trying to bring a new remote team up-to-speed. More common is for one location to do the development, while another gets the [Support](Support) duties.
-When this happens, it's because somehow the team feel that [Coordination Risk](../risks/Coordination-Risk.md) is more unmanageable than [Schedule Risk](../risks/Scarcity-Risk.md#schedule-risk).
+When this happens, it's because somehow the team feel that [Coordination Risk](/tags/Coordination-Risk) is more unmanageable than [Schedule Risk](/tags/Schedule-Risk).
-There are some mitigations here: video-chat, moving staff from location-to-location for face-time, frequent [show-and-tell](Review.md), or simply modularizing accross geographic boundaries, in respect of [Conway's Law](../risks/Coordination-Risk.md):
+There are some mitigations here: video-chat, moving staff from location-to-location for face-time, frequent [show-and-tell](/tags/Review), or simply modularising across geographic boundaries, in line with [Conway's Law](/tags/Coordination-Risk):
> "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations." - _[M. Conway](https://en.wikipedia.org/wiki/Conways_law)_
diff --git a/docs/practices/External-Relations/Support.md b/docs/practices/External-Relations/Support.md
new file mode 100644
index 000000000..f46c96686
--- /dev/null
+++ b/docs/practices/External-Relations/Support.md
@@ -0,0 +1,20 @@
+---
+title: Support
+description: The practice of helping users run the software and resolving their issues in production.
+tags:
+ - Support
+ - Practice
+featured:
+ class: c
+ element: 'Support'
+
+---
+
+
+## Description
+
+tbc
+
+## See Also
+
+
diff --git a/docs/practices/Glossary-Of-Practices.md b/docs/practices/Glossary-Of-Practices.md
index f01eec674..4a5477d3a 100644
--- a/docs/practices/Glossary-Of-Practices.md
+++ b/docs/practices/Glossary-Of-Practices.md
@@ -7,355 +7,6 @@ featured:
sidebar_position: 10
---
-# Glossary
+# Glossary of Practices
-### Abstraction (Using)
-
-> "An abstraction" is the outcome of this process—a concept that acts as a common noun for all subordinate concepts and connects any related concepts as a group, field, or category.[1]
-
-Changing abstractions is often known as _refactoring_. Removing unnecessary or outdated abstractions is often called _simplification_.
-
-See:
-
- - [Refactoring](../risks/Complexity-Risk.md#refactoring)
- - [The Power of Abstractions](../risks/Staging-And-Classifying.md#the-power-of-abstractions)
-
-### Scrum
-
-Agile development methodology.
-
-See:
- - [Fixing Scrum](../estimating/Fixing-Scrum.md)
- - [On Story Points](../estimating/On-Story-Points.md)
-
-### Backlog Refinement
-
-> "Backlog refinement is a process by which team members revise and prioritize a backlog for future sprints.[31] It can be done as a separate stage done before the beginning of a new sprint or as a continuous process that team members work on by themselves. Backlog refinement can include the breaking down of large tasks into smaller and clearer ones, the clarification of success criteria, and the revision of changing priorities and returns. " - [Scrum, _Wikipedia_]https://en.wikipedia.org/wiki/Scrum_(software_development)#Backlog_refinement
-
-See:
-
- - [Tracking Risks](../thinking/Track-Risk.md#visualising-risks)
- - Scrum(#scrum)
-
-### Standardization
-
-Given multiple, different, competing abstractions, an effort to pick a single one and try to persuade everyone to adopt it.
-
-See:
-
-- [Unwritten Software (Software Dependency Risk)](../risks/Software-Dependency-Risk.md#unwritten-software)
-
-### Modularisation
-
-Breaking code up into different subsystems with limited, often well-defined interfaces to interact with them.
-
-See:
-
- - [Hierarchies and Modularisation](../risks/Complexity-Risk.md#hierarchies-and-modularisation)
-
-### Dependency (Using a)
-
-Making use of _libraries_, _services_ or _languages_ to solve a particular programming challenge.
-
-See:
-
- - [Languages and Dependencies](../risks/Complexity-Risk.md#languages-and-dependencies)
- - [Software Libraries (Software Dependency Risk)](../risks/Software-Dependency-Risk.md#2-software-libraries)
- - [Software-as-a-Service (Software Dependency Risk)](../risks/Software-Dependency-Risk.md#3--software-as-a-service)
-
-### Process Introduction
-
-The attempt to formalize the interface for accessing a scarce or controlled resource.
-
-See:
-
-- [The Purpose of Process](../risks/Process-Risk.md#the-purpose-of-process)
-
-### Automation
-
-Converting a manual [Process](../risks/Process-Risk) into an automatic, machine-controlled one.
-
-See:
-
- - [Automation (Meeting Reality)](../thinking/Meeting-Reality.md#example-automation)
-
-### Compilation
-
-Converting source code into executable binary code. Also involves checking the consistency and internal logic of the source code.
-
-See:
-
- - [Time/Reality Tradeoff](../thinking/Cadence.md#time--reality-trade-off)
-
-### Communication
-
-> “The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point.”
-
-Includes the sub-actions or motivation, composition, encoding, transmission, reception, decoding, interpretation and reconciliation.
-
-See:
-
- - [Communication Risk](/tags/Communication-Risk)
-
-### Translation
-
-Converting from one communication protocol to another. Can be called a _bridge_.
-
-See:
-
- - [Ecosystem Bridges](../risks/Boundary-Risk.md#ecosystem-bridges)
-
-### Blaming
-
-The act of shifting burden of responsibility onto someone else.
-
-_See: [Risk-First Diagrams](../thinking/Risk-First-Diagrams.md#example-blaming-others)_
-
-### Requirements Capture
-
-Talking to stakeholders in order to understand what issues they need a piece of software to solve. Often seen as an important step in building an Internal Model of a problem area before tackling a [Specification](#specification).
-
-Also known as _market research_.
-
-See:
-
- - [Waterfall (One Size Fits No One)](thinking/One-Size-Fits-No-One.md)
-
-### Environmental Scanning
-
-Understanding the operational environment of a system in order to anticipate future problems.
-
-See:
-
- - [Environmental Scanning](../risks/Operational-Risk.md#scanning-the-operational-context)
-
-
-### Market Segmentation
-
-A process by which you divide the addressable market of users for a piece of software into different types or _personas_ in order that you can tackle the requirements of a single group in isolation.
-
-See:
-
- - [Feature Access Risk](../risks/Feature-Risk.md#feature-access-risk)
-
-### Innovation
-
-The process of evolving a product to better manage existing or newly discovered [Feature Risk](/tags/Feature-Risk). Also called _evolution_ (often when talking about ecosystems or organisms), _extension_ (when applied to standards or protocols) or _improvement_.
-
-See:
- - [Feature Drift Risk](../risks/Feature-Risk.md#feature-drift-risk)
-
-
-### Marketing
-
-The process of trying to make your product visible and attractive to potential customers by communicating the benefits to them.
-
-> The strategic functions involved in identifying and appealing to particular groups of consumers, often including activities such as advertising, branding, pricing, and sales.
-
-See:
-
- - [Market Risk](../risks/Feature-Risk.md#market-risk)
-
-### Specification
-
-Writing a specification to describe how a piece of functionality should best mitigate [Feature Risk](/tags/Feature-Risk). Also known as _design_.
-
-See:
- - [Development Process](../thinking/Development-Process.md#a-toy-process)
- - [Waterfall (One Size Fits No One)](thinking/One-Size-Fits-No-One.md)
-
-### Redundancy / Horizontal Scaling
-
-Introducing duplication of workers or other resources in order to mitigate single-points-of-failure.
-
-See:
-
-- [Reliability Risk](../risks/Dependency-Risk.md#reliability-risk)
-
-### Reserving (or Buffering)
-
-Creating _reserves_ or _buffers_ of a scarce resource in order to deal with peaks in demand that would deplete the resource.
-
-See:
-
-- [Reliability Risk](../risks/Dependency-Risk.md#reliability-risk)
-
-### Graceful Degredation
-
-Handing lack of a scarce resource by failing in a tolerable way.
-
-See:
-
-- [Scarcity Risk](../risks/Scarcity-Risk.md#mitigations)
-
-### Pair Programming
-
-Two people at the same keyboard, learning and working together.
-
- - [Crisis Mode](../thinking/Crisis-Mode.md)
-
-
-### Pools / Queues
-
-A way of ensuring orderly consumption of scarce resources.
-
-See:
-
-- [Scarcity Risk](../risks/Scarcity-Risk.md#mitigations)
-
-
-### Product Development
-
-Mitigating [Feature Risk](/tags/Feature-Risk) by adding code to a project. Can often be called _coding_ or _implementation_.
-
-_See: [Development Process](../thinking/Development-Process.md#a-toy-process)_
-
-### Integration
-
-Combining different versions of a codebase together to check for consistency. Also called [Continuous Integration](https://en.wikipedia.org/wiki/Continuous_integration).
-
-See:
-- [Development Process](../thinking/Development-Process.md#a-toy-process)_
-- [Production (Cadence)](../thinking/Cadence.md#production)
-
-### Beta Test
-
-Beta testing is the process of testing an unreleased piece of software with a portion of its intended audience.
-
-See:
-
- - [Consider Payoff](../thinking/Consider-Payoff.md)
-
-### User Acceptance Testing (UAT)
-
-Completing a [Feedback Loop](../thinking/Cadence.md) in order to ascertain whether [Feature Risk](/tags/Feature-Risk) has been correctly addressed by new features. Also called _verification_, _user feedback_ or _manual testing_.
-
-See:
- - [Development Process](../thinking/Development-Process.md#a-toy-process)_
- - [User Acceptance Testing (Meeting Reality)](../thinking/Meeting-Reality.md#example-user-acceptance-testing-uat)
- - [Manual Testing](../thinking/Cadence.md#development-cycle-time)
- - [Waterfall (One Size Fits No One)](thinking/One-Size-Fits-No-One.md)
-
-### Release
-
-The act of moving in-development software to being in production, so that clients can make use of it.
-
-See:
-- [Development Process](../thinking/Development-Process.md#a-toy-process)
-- [Consider Payoff](../thinking/Consider-Payoff.md#example-4-continue-testing-or-release)
-- [Production (Cadence)](../thinking/Cadence.md#production)
-
-### Testing
-
-See:
-
- - [Unit testing](#unit-testing)
- - [User Acceptance Testing (UAT)](#user-acceptance-testing-uat)
-
-### Security Testing
-
-Performing tests to evaluate the security of a given system. May include _penetration testing_, for example.
-
-See:
-
-- [Penetration Testing](../risks/Operational-Risk.md#scanning-the-operational-context)
-
-### Experimentation
-
-Improving your internal model by testing (or playing with) components of the real world context. For example, building a _spike solution_.
-
-See:
-
-- [Spike Solution (Coding Bets)](../bets/Coding-Bets.md#spike-solutions-a-new-technology-bet)
-
-### Unit Testing
-
-Writing extra (test) code in the project which can automatically check that the (main, business logic) code works correctly. Used to mitigate [Regression Risk](../risks/Feature-Risk.md#regression-risk) and [Implementation Risk](../risks/Feature-Risk.md#implementation-risk) in a short-cycle feedback loop.
-
-See:
- - [Development Process](../thinking/Development-Process.md#a-toy-process)
- - [Unit Testing (Meeting Reality)](../thinking/Meeting-Reality.md#example-automation)
-
-### Operation / Maintenance
-
-Maintaining running software in production so that it is available for clients to use.
-
-See:
- - [Operational Risk](../risks/Operational-Risk.md)
- - [Waterfall (One Size Fits No One)](thinking/One-Size-Fits-No-One.md)
-
-### Sign-Off
-
-The act of introducing a human-controlled approval step into a [Process](/tags/Process-Risk) in order to mitigate [Operational Risks](../risks/Operational-Risk.md).
-
-See:
-
-- [Processes, Sign-Offs and Agency Risk](../risks/Process-Risk.md#processes-sign-offs-and-agency-risk)_
-
-### Fund Raising / Borrowing
-
-Dealing with a resource shortage by borrowing the resource from another agent temporarily.
-
-See:
-
- - [Funding Risk](../risks/Scarcity-Risk.md#funding-risk)
-
-### Monitoring
-
-Keeping track on systems (agents) to ensure they are operating correctly. Also includes _detecting failure_ and if done periodically (by another party) _audit_.
-
-See:
-
- - [Monitoring](../risks/Agency-Risk.md#monitoring)
- - [Control](../risks/Operational-Risk.md#control)
-
-### Security
-
-Within a given context, limiting the actions of agents to perform certain privileged tasks.
-
-See:
-
-- [Security](/tags/Agency-Risk)
-
-### Accountability (Assigning)
-
-Making an individual or team responsible for some given goal / risk. Perhaps involving some _skin in the game_. Compare _individual_ responsibility with _collective_ responsibility.
-
-See:
-
- - [Goal Alignment](../risks/Agency-Risk.md#goal-alignment)
-
-### Controlling
-
-Keeping a _process_ performing within some acceptable parameters to achieve a goal or mitigate a risk.
-
-See:
-
- - [Operations Management](../risks/Operational-Risk.md#operations-management)
-
-### Planning
-
-Using an [Internal Model](../thinking/Glossary#internal-model) to project forward to some desirable future outcome, and proposing a route across the imagined risk landscape to get there.
-
-various types of planning exist such _supply chain management_, _dependency management_, _project planning_, _just in time_ or _capacity planning_.
-
-See:
-
- - [Operations Management](../risks/Operational-Risk.md#operations-management)
- - [Planning](../risks/Operational-Risk.md#planning)
-
-### Dog-Fooding
-
-_Eating your own dog food_ is the process by which you use your product or service internally in order to get more feedback about how well it works.
-
-See:
-
- - [Consider Payoff](../thinking/Consider-Payoff.md)
-
-### Feature Toggle
-
-A condition within the code enables or disables a feature during runtime, perhaps for a certain group of users.
-
-See:
-
- - [Consider Payoff](../thinking/Consider-Payoff.md)
\ No newline at end of file
+
diff --git a/docs/practices/Planning-And-Management/Approvals.md b/docs/practices/Planning-And-Management/Approvals.md
index 05cb1a1a0..39865996d 100644
--- a/docs/practices/Planning-And-Management/Approvals.md
+++ b/docs/practices/Planning-And-Management/Approvals.md
@@ -43,6 +43,11 @@ practice:
Approval / Sign Off in software development involves getting formal approval from stakeholders at various stages of the project. This practice ensures that the work meets the required standards and specifications before progressing to the next phase, providing a formal communication of acceptance and readiness.
+See:
+
+- [Processes, Sign-Offs and Agency Risk](/risks/Process-Risk.md#processes-sign-offs-and-agency-risk)
+
+
## See Also
diff --git a/docs/practices/Planning-And-Management/Delegation.md b/docs/practices/Planning-And-Management/Delegation.md
new file mode 100644
index 000000000..a1d7c296b
--- /dev/null
+++ b/docs/practices/Planning-And-Management/Delegation.md
@@ -0,0 +1,50 @@
+---
+title: Delegation
+description: The practice of assigning responsibility and authority to others to carry out specific activities or tasks.
+tags:
+ - Planning-Management
+ - Delegation
+featured:
+ class: c
+ element: 'Delegation'
+practice:
+ aka:
+ - "Task Assignment"
+ - "Empowerment"
+ - "Authority Delegation"
+ - "Responsibility Allocation"
+ mitigates:
+ - tag: Resource Risk
+ reason: "Ensures optimal utilization of team members' skills and capabilities."
+ - tag: Schedule Risk
+ reason: "Distributes workload effectively, helping to meet deadlines."
+ - tag: Focus Risk
+ reason: "Allows leaders to focus on higher-priority tasks by delegating routine work."
+ attendant:
+ - tag: Control Risk
+ reason: "Can lead to a loss of control over task execution and quality."
+ - tag: Accountability Risk
+ reason: "Responsibility for outcomes can become unclear, leading to accountability issues."
+ - tag: Communication Risk
+ reason: "Requires clear communication to ensure tasks are understood and executed properly."
+ related:
+ - ../Planning-And-Management/Prioritising
+ - ../Collaboration-And-Communication/Stakeholder-Management
+---
+
+
+
+## Description
+
+> "Delegation is the assignment of any responsibility or authority to another person to carry out specific activities. It is one of the core concepts of management leadership." - [Delegation, _Wikipedia_](https://en.wikipedia.org/wiki/Delegation)
+
+Delegation involves assigning responsibility and authority to others to carry out specific activities or tasks. This practice is essential for efficient management and helps in optimizing resource utilization, distributing workload, and allowing leaders to focus on higher-priority tasks. Effective delegation requires clear communication and proper accountability to ensure tasks are executed correctly and objectives are met.
+
+See:
+
+ - [Goal Alignment](/risks/Agency-Risk.md#goal-alignment)
+ - [Risk-First Diagrams](/thinking/Risk-First-Diagrams.md#example-blaming-others)
+
+## See Also
+
+
diff --git a/docs/practices/Planning-And-Management/Design.md b/docs/practices/Planning-And-Management/Design.md
index 175a3699d..0b53ba27e 100644
--- a/docs/practices/Planning-And-Management/Design.md
+++ b/docs/practices/Planning-And-Management/Design.md
@@ -47,7 +47,7 @@ Architecture / Design in software development involves creating the high-level s
Design is what you do every time you think of an action to mitigate a risk. And **Big Design Up Front** is where you do a lot of it in one go, for example:
- Where you think about the design of all (or a set of) the requirements in one go, in advance.
- - Where you consider a _set of [Attendant Risks](../thinking/Glossary.md#attendant-risk)_ all at the same time.
+ - Where you consider a _set of [Attendant Risks](/thinking/Glossary.md#attendant-risk)_ all at the same time.
Compare with "little" design, where we consider just the _next_ requirement, or the _most pressing_ risk.
@@ -55,11 +55,11 @@ Although it's fallen out of favour in Agile methodologies, there are benefits to
## How It Works
-As we saw in [Meet Reality](../thinking/Meeting-Reality.md), "Navigating the [Risk Landscape](../risks/Risk-Landscape.md)", meant going from a position of high risk, to a position of lower risk. [Agile Design](Agile) is much like [Gradient Descent](https://en.wikipedia.org/wiki/Gradient_descent): each day, one small step after another _downwards in risk_ on the [Risk Landscape](../risks/Risk-Landscape.md).
+As we saw in [Meet Reality](/thinking/Meeting-Reality.md), "Navigating the [Risk Landscape](/risks/Risk-Landscape.md)", meant going from a position of high risk, to a position of lower risk. [Agile Design](Agile) is much like [Gradient Descent](https://en.wikipedia.org/wiki/Gradient_descent): each day, one small step after another _downwards in risk_ on the [Risk Landscape](/risks/Risk-Landscape.md).
But the problem with this is you can get trapped in a [Local Minima](https://en.wikipedia.org/wiki/Maximum_and_minimum#Search), where there are _no_ easy steps to take to get you to where you want to be.
-In these cases, you have to _widen your horizon_ and look at where you want to go: and this is the process of _design_. You're not necessarily now taking steps on the [Risk Landscape](../risks/Risk-Landscape.md), but imagining a place on the [Risk Landscape](../risks/Risk-Landscape.md) where you want to be, and checking it against your [Internal Model](../thinking/Glossary.md#internal-model) for validity.
+In these cases, you have to _widen your horizon_ and look at where you want to go: and this is the process of _design_. You're not necessarily now taking steps on the [Risk Landscape](/risks/Risk-Landscape.md), but imagining a place on the [Risk Landscape](/risks/Risk-Landscape.md) where you want to be, and checking it against your [Internal Model](/thinking/Glossary.md#internal-model) for validity.
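+
+To make the gradient-descent analogy concrete, here is a tiny sketch (Python; the one-dimensional "risk landscape" is invented). Small downhill steps always find the nearest valley, which is not necessarily the lowest one:
+
+```python
+# A sketch of gradient descent on an invented risk landscape with two
+# valleys. Where you end up depends on where you start.
+def risk(x: float) -> float:
+    return (x ** 2 - 1) ** 2 + 0.3 * x   # right valley is higher than the left
+
+def descend(x: float, step: float = 0.01, iterations: int = 1000) -> float:
+    for _ in range(iterations):
+        gradient = (risk(x + 1e-6) - risk(x - 1e-6)) / 2e-6   # numerical slope
+        x -= step * gradient
+    return x
+
+print(descend(0.9))    # settles near x = 0.96: the nearest, shallower valley
+print(descend(-0.9))   # settles near x = -1.04: the lower valley
+```
+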
## See Also
diff --git a/docs/practices/Planning-And-Management/Meeting.md b/docs/practices/Planning-And-Management/Meeting.md
new file mode 100644
index 000000000..d205acac3
--- /dev/null
+++ b/docs/practices/Planning-And-Management/Meeting.md
@@ -0,0 +1,45 @@
+---
+title: Meeting
+description: The practice of gathering team members to discuss project progress, address issues, and make decisions.
+tags:
+ - Collaboration
+ - Communication
+featured:
+ class: c
+ element: 'Meeting'
+practice:
+ aka:
+ - "Team Meeting"
+ - "Stand-up Meeting"
+ - "Status Meeting"
+ - "Sprint Planning"
+ mitigates:
+ - tag: Communication Risk
+ reason: "Facilitates clear and direct communication among team members."
+ - tag: Alignment Risk
+ reason: "Ensures everyone is on the same page regarding project goals and progress."
+ - tag: Issue Management Risk
+ reason: "Provides a platform to identify, discuss, and resolve issues promptly."
+ attendant:
+ - tag: Time Management Risk
+ reason: "Can consume a significant amount of time if not managed effectively."
+ - tag: Participation Risk
+ reason: "Risk of not having all relevant team members present or engaged."
+ - tag: Decision-Making Risk
+ reason: "Decisions may be delayed if consensus is not reached during the meeting."
+ related:
+ - ../Collaboration-And-Communication/Stakeholder-Management
+ - ../Planning-And-Management/Prioritising
+---
+
+
+
+## Description
+
+> "A meeting is an assembly of people for a particular purpose, especially for formal discussion." - [Meeting, _Wikipedia_](https://en.wikipedia.org/wiki/Meeting)
+
+Meetings are essential for effective team collaboration and communication. They provide a structured environment for discussing project progress, addressing issues, making decisions, and ensuring alignment among team members. Regular meetings help in maintaining transparency and keeping everyone informed about the project's status and any changes.
+
+## See Also
+
+
\ No newline at end of file
diff --git a/docs/practices/Planning-And-Management/Prioritising.md b/docs/practices/Planning-And-Management/Prioritising.md
index b165dab72..c870a1756 100644
--- a/docs/practices/Planning-And-Management/Prioritising.md
+++ b/docs/practices/Planning-And-Management/Prioritising.md
@@ -61,11 +61,17 @@ Usually, risk is mitigated by **Prioritisation**. But sometimes, it's not appro
There are several ways you can prioritise work:
-- **Largest Mitigation First**: What's the thing we can do right now to reduce our [Attendant Risk](../thinking/Glossary.md#attendant-risk) most? This is sometimes hard to quantify, given [Hidden Risk](../thinking/Glossary.md#hidden-risk), so maybe an easier metric is...
-- **Biggest Win**: What's the best thing we can do right now to reduce [Attendant Risk](../thinking/Glossary.md#attendant-risk) for least additional [Schedule-Risk](../risks/Scarcity-Risk.md#schedule-risk)? (i.e. simply considering how much *work* is likely to be involved)
-- **Dependency Order**: Sometimes, you can't build Feature A until Feature B is complete. Prioritisation helps to identify and mitigate [Dependency Risk](../risks/Dependency-Risk.md).
+- **Largest Mitigation First**: What's the thing we can do right now to reduce our [Attendant Risk](/thinking/Glossary.md#attendant-risk) most? This is sometimes hard to quantify, given [Hidden Risk](/thinking/Glossary.md#hidden-risk), so maybe an easier metric is...
+- **Biggest Win**: What's the best thing we can do right now to reduce [Attendant Risk](/thinking/Glossary.md#attendant-risk) for least additional [Schedule-Risk](/tags/Schedule-Risk)? (i.e. simply considering how much *work* is likely to be involved)
+- **Dependency Order**: Sometimes, you can't build Feature A until Feature B is complete. Prioritisation helps to identify and mitigate [Dependency Risk](/tags/Dependency-Risk).
-By prioritising, you get to [Meet Reality](../thinking/Meeting-Reality.md) _sooner_ and _more frequently_ and in _small chunks_.
+By prioritising, you get to [Meet Reality](/thinking/Meeting-Reality.md) _sooner_ and _more frequently_ and in _small chunks_.
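+
+As a sketch, "Biggest Win" prioritisation can be as simple as ordering the backlog by estimated risk reduction per unit of effort (Python; the items and numbers are invented):
+
+```python
+# A sketch of "Biggest Win" ordering: risk reduction divided by effort.
+backlog = [
+    {"task": "add retry logic",  "risk_reduction": 8, "effort_days": 2},
+    {"task": "rewrite the UI",   "risk_reduction": 9, "effort_days": 30},
+    {"task": "pin dependencies", "risk_reduction": 5, "effort_days": 1},
+]
+
+def biggest_win(items):
+    return sorted(items,
+                  key=lambda i: i["risk_reduction"] / i["effort_days"],
+                  reverse=True)
+
+for item in biggest_win(backlog):
+    print(item["task"])
+# pin dependencies, add retry logic, rewrite the UI
+```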
+
+
+See:
+ - [Operations Management](/risks/Operational-Risk.md#operations-management)
+ - [Planning](/risks/Operational-Risk.md#planning)
+ - [Tracking Risks](/thinking/Track-Risk.md#visualising-risks)
## See Also
diff --git a/docs/practices/Start.md b/docs/practices/Start.md
index 15d2e4f10..b8782dbbb 100644
--- a/docs/practices/Start.md
+++ b/docs/practices/Start.md
@@ -2,6 +2,10 @@
title: Practices
description: Discussion Of Software Development Practices
+tags:
+ - Front
+sidebar_position: 4
+
featured:
class: bg1
diff --git a/docs/practices/Testing-and-Quality-Assurance/Automated-Testing.md b/docs/practices/Testing-and-Quality-Assurance/Automated-Testing.md
index e915a9e28..8059ee6d1 100644
--- a/docs/practices/Testing-and-Quality-Assurance/Automated-Testing.md
+++ b/docs/practices/Testing-and-Quality-Assurance/Automated-Testing.md
@@ -41,6 +41,11 @@ practice:
Unit testing involves writing and running tests for individual units or components of the software to ensure they function as expected. This practice helps in identifying and fixing issues early in the development process, making the codebase more reliable and maintainable.
+
+See:
+ - [Development Process](/thinking/Development-Process.md#a-toy-process)
+ - [Unit Testing (Meeting Reality)](/thinking/Meeting-Reality.md#example-automation)
+
## See Also
diff --git a/docs/practices/Testing-and-Quality-Assurance/Integration-Testing.md b/docs/practices/Testing-and-Quality-Assurance/Integration-Testing.md
index c2c0d5831..dd92705cb 100644
--- a/docs/practices/Testing-and-Quality-Assurance/Integration-Testing.md
+++ b/docs/practices/Testing-and-Quality-Assurance/Integration-Testing.md
@@ -38,6 +38,10 @@ practice:
Integration Testing involves testing combined parts of the software to ensure they work together correctly. This practice helps in identifying and fixing issues that arise when individual components interact, ensuring that the overall system functions as intended.
+See:
+- [Development Process](/thinking/Development-Process.md#a-toy-process)
+- [Production (Cadence)](/thinking/Cadence.md#production)
+
## See Also
diff --git a/docs/practices/Testing-and-Quality-Assurance/Regression-Testing.md b/docs/practices/Testing-and-Quality-Assurance/Regression-Testing.md
index c290bb8b0..74db2b219 100644
--- a/docs/practices/Testing-and-Quality-Assurance/Regression-Testing.md
+++ b/docs/practices/Testing-and-Quality-Assurance/Regression-Testing.md
@@ -107,7 +107,7 @@ If none of the other issues warn you against regression testing, this should be
One of the biggest problems is that, eventually, it’s probably too much trouble. You have to get both systems up and running at the same time, with the same input data, and deterministic services, and you might have to access the production systems for this, and then get the data out of them, and then run the diff tool and eyeball the numbers. You’ll probably have to clone databases so that A* has the same data as A. You’ll probably have to do that every time you run it as A is a live system...
-Regression testing _seems like_ it's going to be a big win. Sometimes, if you're lucky, it might be. But at least now you can see some of the [Hidden Risks](../thinking/Glossary.md#hidden-risk) associated with it.
+Regression testing _seems like_ it's going to be a big win. Sometimes, if you're lucky, it might be. But at least now you can see some of the [Hidden Risks](/thinking/Glossary.md#hidden-risk) associated with it.
Although [Acceptance Tests](Testing) seem like a harder option, they are much easier to debug, and are probably what you really need: what they tend to do though is surface problems in the original system that you didn't want to fix. But, is that a bad thing?
diff --git a/docs/practices/Testing-and-Quality-Assurance/Security-Testing.md b/docs/practices/Testing-and-Quality-Assurance/Security-Testing.md
index 479579fbd..43f3ab7d6 100644
--- a/docs/practices/Testing-and-Quality-Assurance/Security-Testing.md
+++ b/docs/practices/Testing-and-Quality-Assurance/Security-Testing.md
@@ -40,6 +40,9 @@ practice:
Security Testing involves assessing the security of software applications to identify vulnerabilities and ensure they are protected against threats and attacks. This practice is essential for maintaining the integrity, confidentiality, and availability of software systems.
+See:
+ - [Penetration Testing](/risks/Operational-Risk.md#scanning-the-operational-context)
+
## See Also
diff --git a/docs/practices/Testing-and-Quality-Assurance/User-Acceptance-Testing.md b/docs/practices/Testing-and-Quality-Assurance/User-Acceptance-Testing.md
index 10d1d2140..84dfcfea8 100644
--- a/docs/practices/Testing-and-Quality-Assurance/User-Acceptance-Testing.md
+++ b/docs/practices/Testing-and-Quality-Assurance/User-Acceptance-Testing.md
@@ -13,6 +13,8 @@ practice:
- "Client Acceptance Testing"
- "Customer Validation"
- UAT
+ - Beta Testing
+ - Dogfooding
mitigates:
- tag: Feature-Fit Risk
reason: "Ensures that the software meets the client's requirements and expectations."
@@ -41,6 +43,13 @@ practice:
User Acceptance Testing (UAT) involves having end users test the software to ensure it meets their requirements and expectations. This practice helps in identifying any issues that may not have been caught during previous testing phases and ensures that the final product is user-friendly and functional.
+See:
+ - [Consider Payoff](/thinking/Consider-Payoff.md)
+ - [Development Process](/thinking/Development-Process.md#a-toy-process)
+ - [User Acceptance Testing (Meeting Reality)](/thinking/Meeting-Reality.md#example-user-acceptance-testing-uat)
+ - [Manual Testing](/thinking/Cadence.md#development-cycle-time)
+ - [Waterfall (One Size Fits No One)](/thinking/One-Size-Fits-No-One.md)
+
## See Also
diff --git a/docs/practices/todo.txt b/docs/practices/todo.txt
index 10165806f..5e4b9ff6d 100644
--- a/docs/practices/todo.txt
+++ b/docs/practices/todo.txt
@@ -33,4 +33,12 @@ AFTER MERGE
Consider_payoff should use "Bets" tag.
+
+New Practices Introduced While Merging Glossaries
+
+Standardisation
+Meetings
+Demand Management
+Delegation
+
\ No newline at end of file
diff --git a/docs/presentations/Start.md b/docs/presentations/Start.md
index 6a9a69f38..c6d696ec8 100644
--- a/docs/presentations/Start.md
+++ b/docs/presentations/Start.md
@@ -10,7 +10,7 @@ layout: categories
cat: Presentations
tags:
- Front
-sidebar_position: 7
+sidebar_position: 9
tweet: yes
---
diff --git a/docs/presentations/_category_.yaml b/docs/presentations/_category_.yaml
index c2fc0132f..aab940fdf 100644
--- a/docs/presentations/_category_.yaml
+++ b/docs/presentations/_category_.yaml
@@ -1,4 +1,4 @@
-position: 8
+position: 9
label: 'Presentations'
link:
type: doc
diff --git a/docs/risks/A-Pattern-Language.md b/docs/risks/A-Pattern-Language.md
index 8035a99eb..86d9ed06c 100644
--- a/docs/risks/A-Pattern-Language.md
+++ b/docs/risks/A-Pattern-Language.md
@@ -12,7 +12,7 @@ tweet: yes
# A Pattern Language
-Risk-First is not intended to be a rigorous, scientific work: I don't believe it's possible to objectively analyze a field like software development in any meaningful, statistically significant way (things just change [too fast](../complexity/Silver-Bullets.md)).
+Risk-First is not intended to be a rigorous, scientific work: I don't believe it's possible to objectively analyze a field like software development in any meaningful, statistically significant way (things just change too fast).
Does that diminish it? If you have visited the [TVTropes](https://tvtropes.org) website, you'll know that it's a set of web-pages describing _common patterns_ of narrative, production, character design etc. to do with fiction.
diff --git a/docs/risks/Communication-Risks/Channel-Risk.md b/docs/risks/Communication-Risks/Channel-Risk.md
index dae9dd6a5..6f6a6a778 100644
--- a/docs/risks/Communication-Risks/Channel-Risk.md
+++ b/docs/risks/Communication-Risks/Channel-Risk.md
@@ -28,15 +28,15 @@ Shannon discusses that no channel is perfect: there is always the **risk of noi
![Communication Channel Risk](/img/generated/risks/communication/communication_channel_risks.png)
-But channel risk goes wider than just this mathematical example: messages might be delayed or delivered in the wrong order, or not be acknowledged when they do arrive. Sometimes, a channel is just an inappropriate way of communicating. When you work in a different time-zone to someone else on your team, there is _automatic_ [Channel Risk](Communication-Risk.md#channel-risk), because instantaneous communication is only available for a few hours a day.
+But channel risk goes wider than just this mathematical example: messages might be delayed or delivered in the wrong order, or not be acknowledged when they do arrive. Sometimes, a channel is just an inappropriate way of communicating. When you work in a different time-zone to someone else on your team, there is _automatic_ [Channel Risk](/tags/Channel-Risk), because instantaneous communication is only available for a few hours a day.
-When channels are **poor-quality**, less communication occurs. People will try to communicate just the most important information. But, it's often impossible to know a-priori what constitutes "important". This is why [Extreme Programming](https://en.wikipedia.org/wiki/Extreme_programming) recommends the practices of [Pair Programming](https://en.wikipedia.org/wiki/Pair_programming) and grouping all the developers together: although you don't know whether useful communication will happen, you are mitigating [Channel Risk](Communication-Risk.md#channel-risk) by ensuring high-quality communication channels are in place.
+When channels are **poor-quality**, less communication occurs. People will try to communicate just the most important information. But, it's often impossible to know a-priori what constitutes "important". This is why [Extreme Programming](https://en.wikipedia.org/wiki/Extreme_programming) recommends the practices of [Pair Programming](https://en.wikipedia.org/wiki/Pair_programming) and grouping all the developers together: although you don't know whether useful communication will happen, you are mitigating [Channel Risk](/tags/Channel-Risk) by ensuring high-quality communication channels are in place.
At other times channels are crowded and can contain so much information that we can't hope to receive all the messages. In these cases we don't even observe the whole channel, just parts of it.
### Marketing Communications
-When we are talking about a product or a brand, mitigating [Channel Risk](Communication-Risk.md#channel-risk) is the domain of [Marketing Communications](https://en.wikipedia.org/wiki/Marketing_communications). How do you ensure that the information about your (useful) project makes it to the right people? How do you address the right channels?
+When we are talking about a product or a brand, mitigating [Channel Risk](/tags/Channel-Risk) is the domain of [Marketing Communications](https://en.wikipedia.org/wiki/Marketing_communications). How do you ensure that the information about your (useful) project makes it to the right people? How do you address the right channels?
This works both ways. Let's looks at some of the **Channel Risks** from the point of view of a hypothetical software tool, **D**, which my team would find really useful:
@@ -49,4 +49,4 @@ This works both ways. Let's looks at some of the **Channel Risks** from the poi
![Marketing Communication](/img/generated/risks/communication/communication_marketing.png)
-[Internal Models](../thinking/Glossary.md#internal-model) don't magically get populated with the information they need: they fill up gradually, as shown in the diagram above. Popular products and ideas _spread_, by word-of-mouth or other means. Part of the job of being a good technologist is to keep track of new **Ideas**, **Concepts** and **Options**, so as to use them as [Dependencies](Dependency-Risk.md) when needed.
+[Internal Models](/thinking/Glossary.md#internal-model) don't magically get populated with the information they need: they fill up gradually, as shown in the diagram above. Popular products and ideas _spread_, by word-of-mouth or other means. Part of the job of being a good technologist is to keep track of new **Ideas**, **Concepts** and **Options**, so as to use them as [Dependencies](/tags/Dependency-Risk) when needed.
diff --git a/docs/risks/Communication-Risks/Communication-Risk.md b/docs/risks/Communication-Risks/Communication-Risk.md
index f1d609709..af2a29339 100644
--- a/docs/risks/Communication-Risks/Communication-Risk.md
+++ b/docs/risks/Communication-Risks/Communication-Risk.md
@@ -25,14 +25,14 @@ part_of: Operational Risk
# Communication Risk
-If we all had identical knowledge, there would be no need to do any communicating at all, and therefore no [Communication Risk](Communication-Risk.md).
+If we all had identical knowledge, there would be no need to do any communicating at all, and therefore no [Communication Risk](/tags/Communication-Risk).
-But people are not all-knowing oracles. We rely on our _senses_ to improve our [Internal Models](../thinking/Glossary.md#internal-model) of the world. There is [Communication Risk](Communication-Risk.md) here - we might overlook something vital (like an on-coming truck) or mistake something someone says (like "Don't cut the green wire").
+But people are not all-knowing oracles. We rely on our _senses_ to improve our [Internal Models](/thinking/Glossary.md#internal-model) of the world. There is [Communication Risk](/tags/Communication-Risk) here - we might overlook something vital (like an on-coming truck) or mistake something someone says (like "Don't cut the green wire").
So, we are going to go on a journey discovering Communication Risk, covering:
- A look at the four different _stages_ of communication and examples of each in the world of computing.
-- Breaking down [Communication Risk](Communication-Risk.md) as it affects each stage, discussing the types of risks present for each one.
+- Breaking down [Communication Risk](/tags/Communication-Risk) as it affects each stage, discussing the types of risks present for each one.
- The many problems faced in product marketing.
- The concept of _abstraction_ and it's associated [Invisibility Risk](#invisibility-risk).
@@ -46,9 +46,9 @@ In 1948, Claude Shannon proposed this definition of communication:
And from this same paper we get the diagram above: we move from top-left ("I want to send a message to someone"), clockwise to bottom left where we hope the message has been understood and believed. (I've added this last box, _reconciliation_ to Shannon's original diagram.)
-One of the chief concerns in Shannon's paper is the risk of error between **Transmission** and **Reception**. He creates a theory of _information_ (measured in _bits_), sets the upper-bounds of information that can be communicated over a channel, and describes ways in which [Communication Risk](Communication-Risk.md) between these processes can be mitigated by clever **Encoding** and **Decoding** steps.
+One of the chief concerns in Shannon's paper is the risk of error between **Transmission** and **Reception**. He creates a theory of _information_ (measured in _bits_), sets the upper-bounds of information that can be communicated over a channel, and describes ways in which [Communication Risk](/tags/Communication-Risk) between these processes can be mitigated by clever **Encoding** and **Decoding** steps.
-But it's not just transmission. [Communication Risk](Communication-Risk.md) exists at each of these steps. Let's imagine a human example, where someone, **Alice** is trying to send a simple message to **Bob**.
+But it's not just transmission. [Communication Risk](/tags/Communication-Risk) exists at each of these steps. Let's imagine a human example, where someone, **Alice** is trying to send a simple message to **Bob**.
|Step |Potential Risk |
|----------------------|---------------------------------------------------------|
@@ -59,20 +59,20 @@ But it's not just transmission. [Communication Risk](Communication-Risk.md) exi
|Reception | **Bob** doesn't hear the message clearly (maybe there is background noise). |
|Decoding | **Bob** might not decode what was said into a meaningful sentence. |
|Interpretation | Assuming **Bob** _has_ heard, will he correctly **interpret** which type of chips (or chops) **Alice** was talking about? |
-|Reconciliation | Does **Bob** believe the message? Will he **reconcile** the information into his [Internal Model](../thinking/Glossary.md#internal-model) and act on it? Perhaps not, if **Bob** forgets, or thinks that there are chips at home already.|
+|Reconciliation | Does **Bob** believe the message? Will he **reconcile** the information into his [Internal Model](/thinking/Glossary.md#internal-model) and act on it? Perhaps not, if **Bob** forgets, or thinks that there are chips at home already.|
## Approach To Communication Risk
-To get inside [Communication Risk](Communication-Risk.md), we need to understand **Communication** itself, whether between _machines_, _people_ or _products_: although these seem very different, the process involved (and the risks) are the same for each.
+To get inside [Communication Risk](/tags/Communication-Risk), we need to understand **Communication** itself, whether between _machines_, _people_ or _products_: although these seem very different, the processes involved (and the risks) are the same for each.
![Communication Risk, broken into four areas](/img/generated/risks/communication/communication_2.png)
-There is a symmetry about the steps going on in Shannon's model and we're going to exploit this in order to break down [Communication Risk](Communication-Risk.md) into four basic _stages_, as shown in the diagram above:
+There is a symmetry about the steps going on in Shannon's model and we're going to exploit this in order to break down [Communication Risk](/tags/Communication-Risk) into four basic _stages_, as shown in the diagram above:
- **[Channels](https://en.wikipedia.org/wiki/Communication_channel)**: the medium via which the communication is happening.
- **[Protocols](https://en.wikipedia.org/wiki/Communication_protocol)**: the systems of rules that allow two or more entities of a communications system to transmit information.
- **[Messages](https://en.wikipedia.org/wiki/Message)**: the information we want to convey.
- - **[Internal Models](../thinking/Glossary.md#internal-model)**: the sources and destinations for the messages. Updating internal models (whether in our heads or machines) is the reason why we're communicating.
+ - **[Internal Models](/thinking/Glossary.md#internal-model)**: the sources and destinations for the messages. Updating internal models (whether in our heads or machines) is the reason why we're communicating.
As we look at these four stages we'll consider the risks of each.
diff --git a/docs/risks/Communication-Risks/Invisibility-Risk.md b/docs/risks/Communication-Risks/Invisibility-Risk.md
index 55f8c752d..f0aebb563 100644
--- a/docs/risks/Communication-Risks/Invisibility-Risk.md
+++ b/docs/risks/Communication-Risks/Invisibility-Risk.md
@@ -14,7 +14,7 @@ part_of: Communication Risk
-Another cost of [Abstraction](../thinking/Glossary.md#abstraction) is [Invisibility Risk](Communication-Risk.md#invisibility-risk). While abstraction is a massively powerful technique, it lets the function of a thing hide behind the layers of abstraction and become invisible.
+Another cost of [Abstraction](/thinking/Glossary.md#abstraction) is [Invisibility Risk](/tags/Invisibility-Risk). While abstraction is a massively powerful technique, it lets the function of a thing hide behind the layers of abstraction and become invisible.
As we saw above, [Protocols](Communication-Risk.md#protocols) allow things like the Internet to happen - this is amazing! But the higher level protocols _hide_ the details of the lower ones. HTTP _doesn't know anything about_ IP packets, for example.
@@ -22,11 +22,11 @@ Abstractions hide detail, then. But when they hide from you the details you nee
#### Invisibility Risk In Conversation
-[Invisibility Risk](Communication-Risk.md#invisibility-risk) is risk due to information not sent. Because humans don't need a complete understanding of a concept to use it, we can cope with some [Invisibility Risk](Communication-Risk.md#invisibility-risk) in communication and this saves us time when we're talking. It would be _painful_ to have conversations if, say, the other person needed to understand everything about how cars worked in order to discuss cars.
+[Invisibility Risk](/tags/Invisibility-Risk) is risk due to information not sent. Because humans don't need a complete understanding of a concept to use it, we can cope with some [Invisibility Risk](/tags/Invisibility-Risk) in communication and this saves us time when we're talking. It would be _painful_ to have conversations if, say, the other person needed to understand everything about how cars worked in order to discuss cars.
-For people, [Abstraction](../thinking/Glossary.md#abstraction) is a tool that we can use to refer to other concepts, without necessarily knowing how the concepts work. This divorcing of "what" from "how" is the essence of abstraction and is what makes language useful.
+For people, [Abstraction](/thinking/Glossary.md#abstraction) is a tool that we can use to refer to other concepts, without necessarily knowing how the concepts work. This divorcing of "what" from "how" is the essence of abstraction and is what makes language useful.
-The debt of [Invisibility Risk](Communication-Risk.md#invisibility-risk) comes due when you realise that _not_ being given the details _prevents_ you from reasoning about it effectively. Let's think about this in the context of a project status meeting, for example:
+The debt of [Invisibility Risk](/tags/Invisibility-Risk) comes due when you realise that _not_ being given the details _prevents_ you from reasoning about it effectively. Let's think about this in the context of a project status meeting, for example:
- Can you be sure that the status update contains all the details you need to know?
- Is the person giving the update wrong or lying?
@@ -44,12 +44,12 @@ But something else also happens: by creating **f**, you are saying “I have th
_Referring to **f** is a much simpler job than understanding **f**._
-We try to mitigate this via documentation but this is a terrible deal: documentation is necessarily a simplified explanation of the abstraction, so will still suffer from [Invisibility Risk](Communication-Risk.md#invisibility-risk).
+We try to mitigate this via documentation, but this is a terrible deal: documentation is necessarily a simplified explanation of the abstraction, so it will still suffer from [Invisibility Risk](/tags/Invisibility-Risk).
-[Invisibility Risk](Communication-Risk.md#invisibility-risk) is mainly [Hidden Risk](../thinking/Glossary.md#hidden-risk). (Mostly, _you don't know what you don't know_.) But you can carelessly _hide things from yourself_ with software:
+[Invisibility Risk](/tags/Invisibility-Risk) is mainly [Hidden Risk](/thinking/Glossary.md#hidden-risk). (Mostly, _you don't know what you don't know_.) But you can carelessly _hide things from yourself_ with software:
- Adding a thread to an application that doesn't report whether it worked, failed, or is running out of control and consuming all the cycles of the CPU.
- Redundancy can increase reliability, but only if you know when servers fail, and fix them quickly. Otherwise, you only see problems when the last server fails.
- When building a web-service, can you assume that it's working for the users in the way you want it to?
-When you build a software service, or even implement a thread, ask yourself: "How will I know next week that this is working properly?" If the answer involves manual work and investigation, then your implementation has just cost you in [Invisibility Risk](Communication-Risk.md#invisibility-risk).
+When you build a software service, or even implement a thread, ask yourself: "How will I know next week that this is working properly?" If the answer involves manual work and investigation, then your implementation has just cost you in [Invisibility Risk](/tags/Invisibility-Risk).
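One way to answer that question is to make the work _report on itself_. As a minimal sketch (the names below are illustrative, not from the text above), a background task can publish a heartbeat that something else - a health-check endpoint, a monitoring system - can observe:

```javascript
// Illustrative sketch: a worker loop that leaves visible evidence it is alive.
function doSomeWork() {
  // hypothetical unit of background work
}

let lastHeartbeat = Date.now();

setInterval(() => {
  doSomeWork();
  lastHeartbeat = Date.now(); // updated only when the loop actually ran
}, 1000);

// Something external (a /health endpoint, a monitoring dashboard) can then
// check the heartbeat, turning "manual work and investigation" into a signal.
function isHealthy() {
  return Date.now() - lastHeartbeat < 5000;
}
```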
diff --git a/docs/risks/Communication-Risks/Learning-Curve-Risk.md b/docs/risks/Communication-Risks/Learning-Curve-Risk.md
index a96c6f302..dcf821b0c 100644
--- a/docs/risks/Communication-Risks/Learning-Curve-Risk.md
+++ b/docs/risks/Communication-Risks/Learning-Curve-Risk.md
@@ -14,7 +14,7 @@ part_of: Communication Risk
-If the messages we are receiving force us to update our [Internal Model](../thinking/Glossary.md#internal-model) too much, we can suffer from the problem of "too steep a [Learning Curve](https://en.wikipedia.org/wiki/Learning_curve)" or "[Information Overload](https://en.wikipedia.org/wiki/Information_overload)", where the messages force us to adapt our [Internal Model](../thinking/Glossary.md#internal-model) too quickly for our brains to keep up.
+If the messages we are receiving force us to update our [Internal Model](/thinking/Glossary.md#internal-model) too much, we can suffer from the problem of "too steep a [Learning Curve](https://en.wikipedia.org/wiki/Learning_curve)" or "[Information Overload](https://en.wikipedia.org/wiki/Information_overload)", where the messages require us to adapt our [Internal Model](/thinking/Glossary.md#internal-model) too quickly for our brains to keep up.
Commonly, the easiest option is just to ignore the information channel completely in these cases.
@@ -28,6 +28,6 @@ By now it should be clear that it's going to be _both_ quite hard to read and wr
But now we should be able to see the reason why it's harder to read than write too:
- - When reading code, you are having to shift your [Internal Model](../thinking/Glossary.md#internal-model) to wherever the code is, accepting decisions that you might not agree with and accepting counter-intuitive logical leaps. i.e. [Learning Curve Risk](Communication-Risk.md#learning-curve-risk). _(cf. [Principle of Least Surprise](https://en.wikipedia.org/wiki/Principle_of_least_astonishment))_
- - There is no [Feedback Loop](../thinking/Glossary.md#feedback-loop) between your [Internal Model](../thinking/Glossary.md#internal-model) and the [Reality](../thinking/Glossary.md#meet-reality) of the code, opening you up to [misinterpretation](Communication-Risk.md#misinterpretation). When you write code, your compiler and tests give you this.
- - While reading code _takes less time_ than writing it, this also means the [Learning Curve](Communication-Risk.md#learning-curve-risk) is steeper.
\ No newline at end of file
+ - When reading code, you have to shift your [Internal Model](/thinking/Glossary.md#internal-model) to wherever the code is, accepting decisions you might not agree with and counter-intuitive logical leaps, i.e. [Learning Curve Risk](/tags/Learning-Curve-Risk). _(cf. [Principle of Least Surprise](https://en.wikipedia.org/wiki/Principle_of_least_astonishment))_
+ - There is no [Feedback Loop](/thinking/Glossary.md#feedback-loop) between your [Internal Model](/thinking/Glossary.md#internal-model) and the [Reality](/thinking/Glossary.md#meet-reality) of the code, opening you up to [misinterpretation](Communication-Risk.md#misinterpretation). When you write code, your compiler and tests give you this.
+ - While reading code _takes less time_ than writing it, this also means the [Learning Curve](/tags/Learning-Curve-Risk) is steeper.
\ No newline at end of file
diff --git a/docs/risks/Communication-Risks/Message-Risk.md b/docs/risks/Communication-Risks/Message-Risk.md
index df61e481d..5acbb5f0e 100644
--- a/docs/risks/Communication-Risks/Message-Risk.md
+++ b/docs/risks/Communication-Risks/Message-Risk.md
@@ -26,7 +26,7 @@ This is called [Theory Of Mind](https://en.wikipedia.org/wiki/Theory_of_mind): t
### Message Risk
-A second, related problem is actually [Dependency Risk](Dependency-Risk.md), which is covered more thoroughly in a later section. Often, to understand a new message, you have to have followed everything up to that point already.
+A second, related problem is actually [Dependency Risk](/tags/Dependency-Risk), which is covered more thoroughly in a later section. Often, to understand a new message, you have to have followed everything up to that point already.
The same **Message Dependency Risk** exists for computer software: if there is replication going on between instances of an application and one of the instances misses some messages, you end up with a "[Split Brain](https://en.wikipedia.org/wiki/Split-brain_(computing))" scenario, where later messages can't be processed because they refer to an application state that doesn't exist. For example, a message saying:
@@ -46,6 +46,6 @@ For people, nothing exists unless we have a name for it. The
> "The famous pipe. How people reproached me for it! And yet, could you stuff my pipe? No, it's just a representation, is it not? So if I had written on my picture “This is a pipe”, I'd have been lying!" - [Rene Magritte, of _The Treachery of Images_](https://en.wikipedia.org/wiki/The_Treachery_of_Images)
-People don't rely on rigorous definitions of abstractions like computers do; we make do with fuzzy definitions of concepts and ideas. We rely on [Abstraction](../thinking/Glossary.md#abstraction) to move between the name of a thing and the _idea of a thing_.
+People don't rely on rigorous definitions of abstractions like computers do; we make do with fuzzy definitions of concepts and ideas. We rely on [Abstraction](/thinking/Glossary.md#abstraction) to move between the name of a thing and the _idea of a thing_.
This brings about [Misinterpretation](Communication-Risk.md#misinterpretation): names are not _precise_, and concepts mean different things to different people. We can't be sure that other people have the same meaning for a name that we have.
diff --git a/docs/risks/Communication-Risks/Protocol-Risk.md b/docs/risks/Communication-Risks/Protocol-Risk.md
index d0969bf0b..d4ed71b48 100644
--- a/docs/risks/Communication-Risks/Protocol-Risk.md
+++ b/docs/risks/Communication-Risks/Protocol-Risk.md
@@ -15,9 +15,9 @@ part_of: Communication Risk
> "A communication protocol is a system of rules that allow two or more entities of a communications system to transmit information. " - [Communication Protocol, Wikipedia](https://en.wikipedia.org/wiki/Communication_protocol)
-In this section I want to examine the concept of [Communication Protocols](https://en.wikipedia.org/wiki/Communication_protocol) and how they relate to [Abstraction](../thinking/Glossary.md#abstraction), which is implicated over and over again in different types of risk we will be looking at.
+In this section I want to examine the concept of [Communication Protocols](https://en.wikipedia.org/wiki/Communication_protocol) and how they relate to [Abstraction](/thinking/Glossary.md#abstraction), which is implicated over and over again in different types of risk we will be looking at.
-[Abstraction](../thinking/Glossary.md#abstraction) means separating the _definition_ of something from the _use_ of something. It's a widely applicable concept, but our example below will be specific to communication, and looking at the abstractions involved in loading a web page.
+[Abstraction](/thinking/Glossary.md#abstraction) means separating the _definition_ of something from the _use_ of something. It's a widely applicable concept, but our example below will be specific to communication, and looking at the abstractions involved in loading a web page.
### Clients and Servers
@@ -44,7 +44,7 @@ http://google.com/preferences
The first thing that happens is that the name `google.com` is _resolved_ by DNS. This means that the browser looks up the domain name `google.com` and gets back an [IP Address](https://en.wikipedia.org/wiki/IP_address). An IP Address is a bit like a postal address, but instead of being the address of a building, it is the address of a particular computer.
-This is an [Abstraction](../thinking/Glossary.md#abstraction): although computers use IP addresses like `216.58.204.78`, I can use a human-readable _name_, `google.com`.
+This is an [Abstraction](/thinking/Glossary.md#abstraction): although computers use IP addresses like `216.58.204.78`, I can use a human-readable _name_, `google.com`.
The address `google.com` doesn't even necessarily resolve to that same address each time: Google serves a lot of traffic so there are multiple servers handling the requests and _they have multiple IP addresses for `google.com`_. But as a user, I don't have to worry about this detail.
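This abstraction is visible in code too. Here is a sketch (assuming a Node.js environment, which the text doesn't specify) of resolving the name as a separate step from using it:

```javascript
// Resolving a human-readable name to IP addresses, via Node.js's dns module.
const dns = require("dns");

dns.resolve4("google.com", (err, addresses) => {
  if (err) throw err;
  // e.g. [ '216.58.204.78', ... ] - repeated lookups may return different addresses
  console.log(addresses);
});
```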
@@ -57,7 +57,7 @@ Each packet consists of two things:
- The **IP address**, which tells the network where to send the packet (again, much like you'd write the address on the outside of a parcel).
- The **payload**, the stream of bytes for processing at the destination, like the contents of the parcel.
-But even this concept of "packets" is an [abstraction](../thinking/Glossary.md#abstraction). Although the network understands this protocol, we might be using Wired Ethernet cables, or WiFi, 4G or _something else_ beneath that. You can think of this as analogous to the parcel being delivered on foot, by plane or by car - it doesn't matter to the sender of the parcel!
+But even this concept of "packets" is an [abstraction](/thinking/Glossary.md#abstraction). Although the network understands this protocol, we might be using Wired Ethernet cables, or WiFi, 4G or _something else_ beneath that. You can think of this as analogous to the parcel being delivered on foot, by plane or by car - it doesn't matter to the sender of the parcel!
### 3. 802.11 - WiFi Protocol
@@ -67,7 +67,7 @@ And WiFi is just the first hop. After the WiFi receiver, there will be protocol
### 4. TCP - Transmission Control Protocol
-Another [abstraction](../thinking/Glossary.md#abstraction) going on here is that my browser believes it has a "connection" to the server. This is provided by the TCP protocol.
+Another [abstraction](/thinking/Glossary.md#abstraction) going on here is that my browser believes it has a "connection" to the server. This is provided by the TCP protocol.
But this is a fiction - my "connection" is built on the IP protocol, which as we saw above is just packets of data on the network. So there are lots of packets floating around which say "this connection is still alive" and "I'm message 5 in the sequence" and so on in order to maintain this fiction.
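In code, the fiction is what you program against. As a hedged sketch (again assuming Node.js), the socket object below _behaves_ like a continuous connection, even though underneath it is only ever individual IP packets:

```javascript
// The "connection" abstraction: a TCP socket to a web server.
const net = require("net");

const socket = net.createConnection({ host: "google.com", port: 80 }, () => {
  // We can treat this as an ordered, reliable, two-way stream...
  socket.write("GET / HTTP/1.1\r\nHost: google.com\r\nConnection: close\r\n\r\n");
});

// ...even though, underneath, it is sequenced, acknowledged, re-sent packets.
socket.on("data", (chunk) => console.log(chunk.toString()));
socket.on("error", (err) => console.error(err.message));
```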
@@ -116,7 +116,7 @@ By having a stack of protocols we are able to apply [Separation Of Concerns](htt
![Communication Protocols Risks](/img/generated/risks/communication/communication_protocol_risks.png)
-Hopefully, the above example gives an indication of the usefulness of protocols within software. But for every protocol we use we have [Protocol Risk](Communication-Risk.md#protocol-risk). While this is a problem in human communication protocols, it's really common in computer communication because we create protocols _all the time_ in software.
+Hopefully, the above example gives an indication of the usefulness of protocols within software. But for every protocol we use we have [Protocol Risk](/tags/Protocol-Risk). While this is a problem in human communication protocols, it's really common in computer communication because we create protocols _all the time_ in software.
For example, as soon as we define a Javascript function (called **b** here), we are creating a protocol for other functions (**a** here) to use it:
@@ -141,7 +141,7 @@ function b(a, b, c, d /* new parameter */) {
Then, **a** will instantly have a problem calling it and there will be an error of some sort.
-[Protocol Risk](Communication-Risk.md#protocol-risk) also occurs when we use [Data Types](https://en.wikipedia.org/wiki/Data_type): whenever we change the data type, we need to correct the usages of that type. Note above, I've given the `JavaScript` example, but I'm going to switch to `TypeScript` now:
+[Protocol Risk](/tags/Protocol-Risk) also occurs when we use [Data Types](https://en.wikipedia.org/wiki/Data_type): whenever we change the data type, we need to correct the usages of that type. Note above, I've given the `JavaScript` example, but I'm going to switch to `TypeScript` now:
```typescript
interface BInput {
@@ -163,12 +163,12 @@ function a() {
By using a [static type checker](https://en.wikipedia.org/wiki/Type_system#Static_type_checking), we can identify issues like this, but there is a trade-off:
-- we mitigate [Protocol Risk](Communication-Risk.md#protocol-risk), because we define the protocols _once only_ in the program, and ensure that usages all match the specification.
-- but the tradeoff is more _finger-typing_, which means [Codebase Risk](Complexity-Risk.md#codebase-risk) in some circumstances.
+- we mitigate [Protocol Risk](/tags/Protocol-Risk), because we define the protocols _once only_ in the program, and ensure that usages all match the specification.
+- but the tradeoff is more _finger-typing_, which means [Codebase Risk](/tags/Codebase-Risk) in some circumstances.
Nevertheless, static type checking is so prevalent in software that clearly in most cases, the trade-off has been worth it: even languages like [Clojure](https://clojure.org) have been retro-fitted with [type checkers](https://github.com/clojure/core.typed).
-Let's look at some further types of [Protocol Risk](Communication-Risk.md#protocol-risk).
+Let's look at some further types of [Protocol Risk](/tags/Protocol-Risk).
#### Protocol Incompatibility
@@ -186,7 +186,7 @@ There are various mitigating strategies for this. We'll look at two now: **Back
#### Backward Compatibility
-Backwards Compatibility mitigates [Protocol Risk](Communication-Risk.md#protocol-risk). This means supporting the old protocol until it falls out of use. If a supplier is pushing for a change in protocol it either must ensure that it is Backwards Compatible with the clients it is communicating with, or make sure they are upgraded concurrently. When building [web services](https://en.wikipedia.org/wiki/Web_service), for example, it's common practice to version all API's so that you can manage the migration. Something like this:
+Backwards Compatibility mitigates [Protocol Risk](/tags/Protocol-Risk). This means supporting the old protocol until it falls out of use. If a supplier is pushing for a change in protocol, it must either ensure that it is Backwards Compatible with the clients it is communicating with, or make sure they are upgraded concurrently. When building [web services](https://en.wikipedia.org/wiki/Web_service), for example, it's common practice to version all APIs so that you can manage the migration. Something like this:
- Supplier publishes `/api/v1/something`.
- Clients use `/api/v1/something`.
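Sketching the supplier's side of that scheme (assuming an Express-style HTTP framework - an illustration, not something mandated by the text):

```javascript
// Both protocol versions served side-by-side until v1 falls out of use.
const express = require("express");
const app = express();

app.get("/api/v1/something", (req, res) => {
  res.json({ name: "thing" });                  // the old message shape
});

app.get("/api/v2/something", (req, res) => {
  res.json({ name: "thing", createdAt: null }); // the new message shape
});

app.listen(8080);
```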
@@ -197,7 +197,7 @@ Backwards Compatibility mitigates [Protocol Risk](Communication-Risk.md#protocol
#### Forward Compatibility
-`HTML` and `HTTP` provide "graceful failure" to mitigate [Protocol Risk](Communication-Risk.md#protocol-risk): while it's expected that all clients can parse the syntax of `HTML` and `HTTP`, it's not necessary for them to be able to handle all of the tags, attributes and rules they see. The specification for both these standards is that if you don't understand something, ignore it. Designing with this in mind means that old clients can always at least cope with new features, but it's not always possible.
+`HTML` and `HTTP` provide "graceful failure" to mitigate [Protocol Risk](/tags/Protocol-Risk): while it's expected that all clients can parse the syntax of `HTML` and `HTTP`, it's not necessary for them to be able to handle all of the tags, attributes and rules they see. The specification for both these standards is that if you don't understand something, ignore it. Designing with this in mind means that old clients can always at least cope with new features, but it's not always possible.
`JavaScript` _can't_ support this: because the meaning of the next instruction will often depend on the result of the previous one.
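For _data_ protocols you control, though, the same "if you don't understand something, ignore it" rule can be designed in. A hedged sketch:

```javascript
// A forward-compatible consumer: it picks out the fields it knows and
// deliberately ignores the rest, so newer senders don't break it.
function readUserV1(message) {
  const { id, name } = JSON.parse(message); // unknown fields are simply dropped
  return { id, name };
}

// A newer sender adds a field; the old consumer still copes.
readUserV1('{"id": 1, "name": "Alice", "pronouns": "she/her"}');
```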
@@ -205,6 +205,6 @@ Do human languages support forward compatibility? To some extent! New words ar
#### Protocol Implementation
-A second aspect of [Protocol Risk](Communication-Risk.md#protocol-risk) exists in heterogeneous computing environments where protocols have been independently implemented based on standards. For example, there are now so many different browsers, all supporting variations of `HTTP`, `HTML` and `JavaScript` that it becomes impossible to test a website comprehensively over all the different permutations.
+A second aspect of [Protocol Risk](/tags/Protocol-Risk) exists in heterogeneous computing environments where protocols have been independently implemented based on standards. For example, there are now so many different browsers, all supporting variations of `HTTP`, `HTML` and `JavaScript` that it becomes impossible to test a website comprehensively over all the different permutations.
-To mitigate as much [Protocol Risk](Communication-Risk.md#protocol-risk) as possible, generally we test web sites in a subset of browsers, and use a lowest-common-denominator approach to choosing protocol and language features.
+To mitigate as much [Protocol Risk](/tags/Protocol-Risk) as possible, generally we test web sites in a subset of browsers, and use a lowest-common-denominator approach to choosing protocol and language features.
diff --git a/docs/risks/Communication-Risks/Trust-And-Belief-Risk.md b/docs/risks/Communication-Risks/Trust-And-Belief-Risk.md
index e681f8de0..b97c800c8 100644
--- a/docs/risks/Communication-Risks/Trust-And-Belief-Risk.md
+++ b/docs/risks/Communication-Risks/Trust-And-Belief-Risk.md
@@ -13,11 +13,11 @@ part_of: Communication Risk
---
-Although protocols can sometimes handle security features of communication (such as [Authentication](https://en.wikipedia.org/wiki/Authentication) and preventing [man-in-the-middle attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack)), trust goes further than this, it is the flip-side of [Agency Risk](Agency-Risk.md), which we will look at later: can you be sure that the other party in the communication is acting in your best interests?
+Although protocols can sometimes handle security features of communication (such as [Authentication](https://en.wikipedia.org/wiki/Authentication) and preventing [man-in-the-middle attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack)), trust goes further than this - it is the flip-side of [Agency Risk](/tags/Agency-Risk), which we will look at later: can you be sure that the other party in the communication is acting in your best interests?
Even if the **receiver** trusts the **sender**, they may not _believe_ the message. Let's look at some reasons for that:
-- **[Weltanschauung (World View)](https://en.wikipedia.org/wiki/World_view)**: the ethics, values and beliefs in the receiver's [Internal Model](../thinking/Glossary.md#internal-model) may be incompatible to those from the sender.
+- **[Weltanschauung (World View)](https://en.wikipedia.org/wiki/World_view)**: the ethics, values and beliefs in the receiver's [Internal Model](/thinking/Glossary.md#internal-model) may be incompatible with those of the sender.
- **[Relativism](https://en.wikipedia.org/wiki/Relativism)** is the concept that there are no universal truths. Every truth is from a frame of reference. For example, what constitutes _offensive language_ is dependent on the listener.
- **[Psycholinguistics](https://en.wikipedia.org/wiki/Psycholinguistics)** is the study of how humans acquire languages. There are different languages, dialects, and _industry dialects_. We all understand language in different ways, take different meanings and apply different contexts to the messages.
diff --git a/docs/risks/Communication-Risks/Wrap-Up.md b/docs/risks/Communication-Risks/Wrap-Up.md
index 744913965..8cd434abc 100644
--- a/docs/risks/Communication-Risks/Wrap-Up.md
+++ b/docs/risks/Communication-Risks/Wrap-Up.md
@@ -9,6 +9,6 @@ sidebar_position: 8
![Communication Risks, Summarised](/img/generated/risks/communication/communication_3.png)
-In this section, we've looked at [Communication Risk](Communication-Risk.md) itself and broken it down into six sub-types of risk as shown in the diagram above. Again, we are calling out _patterns_ here. You could classify communication risks in other ways, but concepts like [Learning Curve Risk](#learning-curve-risk) and [Invisibility Risk](#invisibility-risk) we will be using again in again in Risk-First.
+In this section, we've looked at [Communication Risk](/tags/Communication-Risk) itself and broken it down into six sub-types of risk as shown in the diagram above. Again, we are calling out _patterns_ here. You could classify communication risks in other ways, but concepts like [Learning Curve Risk](#learning-curve-risk) and [Invisibility Risk](#invisibility-risk) are ones we will be using again and again in Risk-First.
-In the next section we will address complexity head-on and understand how [Complexity Risk](Complexity-Risk.md) manifests in software projects.
+In the next section we will address complexity head-on and understand how [Complexity Risk](/tags/Complexity-Risk) manifests in software projects.
diff --git a/docs/risks/Complexity-Risk.md b/docs/risks/Complexity-Risk.md
index 90c39b25f..987562fd0 100644
--- a/docs/risks/Complexity-Risk.md
+++ b/docs/risks/Complexity-Risk.md
@@ -16,12 +16,12 @@ part_of: Operational Risk
-[Complexity Risk](Complexity-Risk.md) is the [risk](../thinking/Glossary.md#risk) to your project due to its underlying "complexity". Here, we will break down exactly what we mean by complexity, look at where it can hide on a software project and discuss some ways in which we can manage this important risk.
+[Complexity Risk](/tags/Complexity-Risk) is the [risk](/thinking/Glossary.md#risk) to your project due to its underlying "complexity". Here, we will break down exactly what we mean by complexity, look at where it can hide on a software project and discuss some ways in which we can manage this important risk.
Here we will:
- Look at two ways in which complexity is measured, via [Kolmogorov Complexity](Complexity-Risk.md#kolmogorov-complexity) and [Graph-Connectivity](Complexity-Risk.md#connectivity).
- - Define [Complexity Risk](Complexity-Risk.md), and the related risks of [Codebase Risk](Complexity-Risk.md#codebase-risk) (complexity in your codebase) and [Dead-End Risk](Complexity-Risk.md#dead-end-risk) (risk of implementations getting "stuck").
+ - Define [Complexity Risk](/tags/Complexity-Risk), and the related risks of [Codebase Risk](/tags/Codebase-Risk) (complexity in your codebase) and [Dead-End Risk](/tags/Dead-End-Risk) (risk of implementations getting "stuck").
- Discuss ways to think about complexity: as [mass](Complexity-Risk.md#complexity-is-mass), [technical debt](Complexity-Risk.md#technical-debt) and [mess](Complexity-Risk.md#kitchen-analogy).
- Discuss ways to manage complexity risk, such as modularisation, hierarchy, use of languages and libraries and by avoiding feature creep.
- Discuss places where Complexity Risk [manifests](Complexity-Risk.md#where-complexity-hides) in computing.
@@ -34,7 +34,7 @@ Complexity arises in software projects in a number of different ways. We're goi
> “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” - Bill Gates
-The size of your codebase, the amount of code, the number of modules, the interconnectedness of the modules and how well-factored the code is all contribute to [Codebase Risk](Complexity-Risk.md#codebase-risk): a specific type of [Complexity Risk](Complexity-Risk.md) meaning _the complexity of your codebase_.
+The size of your codebase, the amount of code, the number of modules, the interconnectedness of the modules and how well-factored the code is all contribute to [Codebase Risk](/tags/Codebase-Risk): a specific type of [Complexity Risk](/tags/Complexity-Risk) meaning _the complexity of your codebase_.
Before we look at the implications of this risk, let's look at some prior-art on how to measure this complexity.
@@ -91,11 +91,11 @@ function out() { (7 )
### Abstraction
-What's happening here is that we're _exploiting a pattern_: we noticed that `abcd` occurs several times, so we defined it a single time and then used it over and over, like a stamp. This is called [abstraction](../thinking/Glossary.md#abstraction).
+What's happening here is that we're _exploiting a pattern_: we noticed that `abcd` occurs several times, so we defined it a single time and then used it over and over, like a stamp. This is called [abstraction](/thinking/Glossary.md#abstraction).
By applying abstraction, we can improve in the direction of the Kolmogorov lower bound. By allowing ourselves to say that _symbols_ (like `out` and `ABCD`) are worth one complexity point, we've given ourselves licence to be descriptive in naming our `function`s and `const`s. Naming things is an important part of abstraction, because to use something, you have to be able to refer to it.
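A minimal sketch of that idea (the full worked example is abbreviated in this diff):

```javascript
// Before: the pattern is spelled out at every use site.
// "abcdabcdabcdabcd"

// After: define the pattern once, then refer to it by name - like a stamp.
const ABCD = "abcd";

function out() {
  return ABCD.repeat(4);
}
```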
-Generally, the more complex a piece of software is, the more difficulty users will have [understanding it](Feature-Risk.md#conceptual-integrity-risk), and the more work developers will have changing it.
+Generally, the more complex a piece of software is, the more difficulty users will have [understanding it](/tags/Conceptual-Integrity-Risk), and the more work developers will have changing it.
Although we should prefer the third version of our code over either the first or second (because of its brevity), we could go further down into [Code Golf](https://en.wikipedia.org/wiki/Code_golf) territory. The following Javascript program plays [FizzBuzz](https://en.wikipedia.org/wiki/Fizz_buzz) up to 100, but is less readable than you might hope.
@@ -105,23 +105,23 @@ for(i=0;i<100;)document.write(((++i%3?'':'Fizz')+
(total: 62)
```
-So there is at some point a trade-off to be made between [Complexity Risk](Complexity-Risk.md) and [Communication Risk](Communication-Risk.md). That is, after a certain point, reducing Kolmogorov Complexity further risks making the program less intelligible.
+So at some point there is a trade-off to be made between [Complexity Risk](/tags/Complexity-Risk) and [Communication Risk](/tags/Communication-Risk). That is, after a certain point, reducing Kolmogorov Complexity further risks making the program less intelligible.
### Refactoring
![Using Refactoring and Abstraction to reduce Codebase Risk](/img/generated/risks/complexity/refactoring.png)
-Abstraction is therefore a key tool in the battle against [Complexity Risk](Complexity-Risk.md): it allows us to jettison repetition. But, as the code-golf example shows, you can go too far. So an important part of software development is picking the _right_ abstractions: ones that are useful, durable and pervasive.
+Abstraction is therefore a key tool in the battle against [Complexity Risk](/tags/Complexity-Risk): it allows us to jettison repetition. But, as the code-golf example shows, you can go too far. So an important part of software development is picking the _right_ abstractions: ones that are useful, durable and pervasive.
Time spent replacing poor abstractions with better ones is called _refactoring_.
-The above diagram demonstrates that a key practice in battling [Codebase Risk](Complexity-Risk.md#codebase-risk) is choosing **a minimal set of useful abstractions**. The attendant risk in doing that work (the downside) is the _time spent doing it_. That is, [Schedule Risk](Scarcity-Risk.md#schedule-risk).
+The above diagram demonstrates that a key practice in battling [Codebase Risk](/tags/Codebase-Risk) is choosing **a minimal set of useful abstractions**. The attendant risk in doing that work (the downside) is the _time spent doing it_. That is, [Schedule Risk](/tags/Schedule-Risk).
Sometimes it is better to have an ok-ish abstraction _now_ rather than a brilliant abstraction _too late_.
### Languages and Dependencies
-The above Javascript example also demonstrates a second way in which we can manage [Codebase Risk](Complexity-Risk.md#codebase-risk).
+The above Javascript example also demonstrates a second way in which we can manage [Codebase Risk](/tags/Codebase-Risk).
In the third version of the program, we used the method `.repeat()`, which allowed us to save a further 16 symbols.
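For reference, `String.prototype.repeat` replaces a hand-written loop with a single library call:

```javascript
// One call to the standard library...
"ab".repeat(4);               // "abababab"

// ...instead of several symbols' worth of loop:
let s = "";
for (let i = 0; i < 4; i++) s += "ab";
```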
@@ -129,7 +129,7 @@ In the third version of the program, we used the method `.repeat()`, which allow
![Using Libraries and Languages to reduce Codebase Risk](/img/generated/risks/complexity/libraries.png)
-So as the above diagram shows, we can also reduce [Codebase Risk](Complexity-Risk.md#codebase-risk) in our choice of _languages_ and _third party libraries_. This doesn't come without a cost, though. We are trading-off our own [Codebase Risk](Complexity-Risk.md#codebase-risk) but increasing [Dependency Risk](Dependency-Risk.md) and [Boundary Risk](Boundary-Risk.md) instead.
+So as the above diagram shows, we can also reduce [Codebase Risk](/tags/Codebase-Risk) in our choice of _languages_ and _third party libraries_. This doesn't come without a cost, though: we are trading off our own [Codebase Risk](/tags/Codebase-Risk) against increased [Dependency Risk](/tags/Dependency-Risk) and [Boundary Risk](/tags/Boundary-Risk).
## Connectivity
@@ -210,7 +210,7 @@ Secondly, it's not apparent to **i** that **j** _even exists_: we have hidden th
![Modularisation and Hierarchy](/img/generated/risks/complexity/modularisation.png)
-The trade-off of modularisation/hierarchy is shown in the above diagram, and it's our third tool for battling [Codebase Risk](Complexity-Risk.md#codebase-risk).
+The trade-off of modularisation/hierarchy is shown in the above diagram, and it's our third tool for battling [Codebase Risk](/tags/Codebase-Risk).
But we don't just see this in software - it's everywhere in our lives: societies, businesses and living organisms all use this technique. For example in our bodies we have:
@@ -223,9 +223,9 @@ The great complexity-reducing mechanism of modularisation is that _you only have
## Analogies
-So, we've looked at some measures of software structure complexity. We can say "this is more complex than this" for a given piece of code or structure. We've also looked at three ways to manage it: [Abstraction](../thinking/Glossary.md#abstraction) and [Modularisation](Complexity-Risk.md#hierarchies-and-modularisation) and via [Dependencies](Complexity-Risk.md#languages-and-dependencies).
+So, we've looked at some measures of software structure complexity. We can say "this is more complex than this" for a given piece of code or structure. We've also looked at three ways to manage it: [Abstraction](/thinking/Glossary.md#abstraction), [Modularisation](Complexity-Risk.md#hierarchies-and-modularisation) and [Dependencies](Complexity-Risk.md#languages-and-dependencies).
-However, we've not really said why complexity entails [Risk](../thinking/Glossary.md#attendant-risk). So let's address that now by looking at three analogies, [Mass](Complexity-Risk.md#complexity-is-mass), [Technical Debt](Complexity-Risk.md#technical-debt) and [Mess](Complexity-Risk.md#kitchen-analogy)
+However, we've not really said why complexity entails [Risk](/thinking/Glossary.md#attendant-risk). So let's address that now by looking at three analogies: [Mass](Complexity-Risk.md#complexity-is-mass), [Technical Debt](Complexity-Risk.md#technical-debt) and [Mess](Complexity-Risk.md#kitchen-analogy).
### Complexity is Mass
@@ -247,23 +247,23 @@ I'm not an expert in physics _at all_ so there is every chance that I am pushing
If we want to move _fast_, we need simple code-bases.
-At a basic level, [Complexity Risk](Complexity-Risk.md) heavily impacts on [Schedule Risk](Scarcity-Risk.md#schedule-risk): more complexity means you need more force to get things done, which takes longer.
+At a basic level, [Complexity Risk](/tags/Complexity-Risk) heavily impacts on [Schedule Risk](/tags/Schedule-Risk): more complexity means you need more force to get things done, which takes longer.
### Technical Debt
-The most common way we talk about [Complexity Risk](Complexity-Risk.md) in software is as [Technical Debt](Complexity-Risk.md#technical-debt):
+The most common way we talk about [Complexity Risk](/tags/Complexity-Risk) in software is as [Technical Debt](Complexity-Risk.md#technical-debt):
> "Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite... The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organisations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise." - [Ward Cunningham, 1992, _Wikipedia, Technical Debt_](https://en.wikipedia.org/wiki/Technical_debt)
-Building a low-complexity first-time solution is often a waste: in the first version, we're usually interested in reducing [Feature Risk](Feature-Risk.md) as fast as possible. That is, putting working software in front of users to get [feedback](../thinking/Meeting-Reality.md). We would rather carry [Complexity Risk](Complexity-Risk.md) than take on more [Schedule Risk](Scarcity-Risk.md#schedule-risk).
+Building a low-complexity first-time solution is often a waste: in the first version, we're usually interested in reducing [Feature Risk](/tags/Feature-Risk) as fast as possible. That is, putting working software in front of users to get [feedback](/thinking/Meeting-Reality.md). We would rather carry [Complexity Risk](/tags/Complexity-Risk) than take on more [Schedule Risk](/tags/Schedule-Risk).
-So a quick-and-dirty, over-complex implementation mitigates the same [Feature Risk](Feature-Risk.md) and allows you to [Meet Reality](../thinking/Meeting-Reality.md) faster.
+So a quick-and-dirty, over-complex implementation mitigates the same [Feature Risk](/tags/Feature-Risk) and allows you to [Meet Reality](/thinking/Meeting-Reality.md) faster.
-But having mitigated the [Feature Risk](Feature-Risk.md) this way, you are likely exposed to a higher level of [Complexity Risk](Complexity-Risk.md) than would be desirable. This "carries forward" and means that in the future, you're going to be slower. As in the case of a real debt, "servicing" the debt incurs a steady, regular cost.
+But having mitigated the [Feature Risk](/tags/Feature-Risk) this way, you are likely exposed to a higher level of [Complexity Risk](/tags/Complexity-Risk) than would be desirable. This "carries forward" and means that in the future, you're going to be slower. As in the case of a real debt, "servicing" the debt incurs a steady, regular cost.
### Kitchen Analogy
-It’s often hard to make the case for minimising [Technical Debt](Complexity-Risk.md#technical-debt): it often feels that there are more important priorities, especially when technical debt can be “swept under the carpet” and forgotten about until later. (See [Discounting](../thinking/Evaluating-Risk.md#discounting-the-future-to-zero).)
+It’s often hard to make the case for minimising [Technical Debt](Complexity-Risk.md#technical-debt): it often feels that there are more important priorities, especially when technical debt can be “swept under the carpet” and forgotten about until later. (See [Discounting](/thinking/Evaluating-Risk.md#discounting-the-future-to-zero).)
One helpful analogy I have found is to imagine your code-base is a kitchen. After preparing a meal (i.e. delivering the first implementation), _you need to tidy up the kitchen_. This is just something everyone does as a matter of _basic sanitation_.
@@ -273,7 +273,7 @@ It's not long before someone comes down with food poisoning.
![Complexity Risk and its implications](/img/generated/risks/complexity/complexity-risk-impact.png)
-We wouldn't tolerate this behaviour in a restaurant kitchen, so why put up with it in a software project? This state-of-affairs is illustrated in the above diagram. Not only does [Complexity Risk](Complexity-Risk.md) slow down future development, it can be a cause of [Operational Risks](Operational-Risk.md) and [Security Risks](Agency-Risk.md#security).
+We wouldn't tolerate this behaviour in a restaurant kitchen, so why put up with it in a software project? This state-of-affairs is illustrated in the above diagram. Not only does [Complexity Risk](/tags/Complexity-Risk) slow down future development, it can be a cause of [Operational Risks](/tags/Operational-Risk) and [Security Risks](Agency-Risk.md#security).
### Feature Creep
@@ -284,17 +284,17 @@ In Brooks' essay "No Silver Bullet - Essence and Accident in Software Engineerin
The problem with this definition is that we are accepting features of our software as _essential_.
-Applying Risk-First, if you want to mitigate some [Feature Risk](Feature-Risk.md) then you have to pick up [Complexity Risk](Complexity-Risk.md) as a result. But, that's a _choice you get to make_.
+Applying Risk-First, if you want to mitigate some [Feature Risk](/tags/Feature-Risk) then you have to pick up [Complexity Risk](/tags/Complexity-Risk) as a result. But, that's a _choice you get to make_.
![Mitigating Feature Risk](/img/generated/risks/complexity/feature-creep.png)
-Therefore, [Feature Creep](https://en.wikipedia.org/wiki/Feature_creep) (or [Gold Plating](https://en.wikipedia.org/wiki/Gold_plating_(software_engineering))) is a failure to observe this basic equation: instead of considering this trade off, you're building _every feature possible_. This will impact on [Complexity Risk](Complexity-Risk.md).
+Therefore, [Feature Creep](https://en.wikipedia.org/wiki/Feature_creep) (or [Gold Plating](https://en.wikipedia.org/wiki/Gold_plating_(software_engineering))) is a failure to observe this basic equation: instead of considering this trade-off, you're building _every feature possible_. This will impact on [Complexity Risk](/tags/Complexity-Risk).
-Sometimes, feature-creep happens because either managers feel they need to keep their staff busy, or the staff decide on their own that they need to [keep themselves busy](Agency-Risk.md). This is something we'll return to in [Agency Risk](Agency-Risk.md).
+Sometimes, feature-creep happens because either managers feel they need to keep their staff busy, or the staff decide on their own that they need to [keep themselves busy](/tags/Agency-Risk). This is something we'll return to in [Agency Risk](/tags/Agency-Risk).
## Dead-End Risk
-[Dead-End Risk](Complexity-Risk.md#dead-end-risk) is where you take an action that you _think_ is useful, only to find out later that actually it was a dead-end and your efforts were wasted. Here, we'll see that [Complexity Risk](Complexity-Risk.md) is a big cause of this.
+[Dead-End Risk](/tags/Dead-End-Risk) is where you take an action that you _think_ is useful, only to find out later that actually it was a dead-end and your efforts were wasted. Here, we'll see that [Complexity Risk](/tags/Complexity-Risk) is a big cause of this.
### An Example
@@ -304,9 +304,9 @@ Finally, the team realises that actually authentication would be something that
At this point, you realise you're in a **Dead End**:
- - **Option 1: Continue.** You carry on making minor incremental improvements to the accounting authentication system (carrying the extra [Complexity Risk](Complexity-Risk.md) of the duplicated functionality).
- - **Option 2: Merge.** You rip out the accounting authentication system and merge in the Approvals authentication system, consuming lots of development time in the process, due to the difficulty in migrating users from the old to new way of working. There is [Implementation Risk](Feature-Risk.md#implementation-risk) here.
- - **Option 3: Remove.** You start again, trying to take into account both sets of requirements at the same time, again, possibly surfacing new hidden [Complexity Risk](Complexity-Risk.md) due to the combined approach. Rewriting code can _seem_ like a way to mitigate [Complexity Risk](Complexity-Risk.md) but it usually doesn't work out too well. As Joel Spolsky says:
+ - **Option 1: Continue.** You carry on making minor incremental improvements to the accounting authentication system (carrying the extra [Complexity Risk](/tags/Complexity-Risk) of the duplicated functionality).
+ - **Option 2: Merge.** You rip out the accounting authentication system and merge in the Approvals authentication system, consuming lots of development time in the process, due to the difficulty in migrating users from the old to the new way of working. There is [Implementation Risk](/tags/Implementation-Risk) here.
+ - **Option 3: Remove.** You start again, trying to take into account both sets of requirements at the same time - possibly surfacing new hidden [Complexity Risk](/tags/Complexity-Risk) due to the combined approach. Rewriting code can _seem_ like a way to mitigate [Complexity Risk](/tags/Complexity-Risk) but it usually doesn't work out too well. As Joel Spolsky says:
> There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming: _It’s harder to read code than to write it._ - [Things You Should Never Do, Part 1, _Joel Spolsky_](https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/)
@@ -316,13 +316,13 @@ Whichever option you choose, this is a [Dead End](#dead-end-risk) because with h
Working in a complex environment makes it harder to see developmental dead-ends.
-Sometimes, the path across the [Risk Landscape](Risk-Landscape.md) will take you to dead ends, and the only benefit to be gained is experience. No one deliberately chooses a dead end - often you can take an action that doesn't pay off, but frequently the dead end appears from nowhere: it's a [Hidden Risk](../thinking/Glossary.md#hidden-risk). The source of a lot of this hidden risk is the complexity of the [risk landscape](../thinking/Glossary.md#risk-landscape).
+Sometimes, the path across the [Risk Landscape](Risk-Landscape.md) will take you to dead ends, and the only benefit to be gained is experience. No one deliberately chooses a dead end - often you can take an action that doesn't pay off, but frequently the dead end appears from nowhere: it's a [Hidden Risk](/thinking/Glossary.md#hidden-risk). The source of a lot of this hidden risk is the complexity of the [risk landscape](/thinking/Glossary.md#risk-landscape).
-[Version Control Systems](https://en.wikipedia.org/wiki/Version_control) like [Git](https://en.wikipedia.org/wiki/Git) are a useful mitigation of [Dead-End Risk](Complexity-Risk.md#dead-end-risk), because using them means that at least you can _go back_ to the point where you made the bad decision and go a different way. Additionally, they provide you with backups against the often inadvertent [Dead-End Risk](Complexity-Risk.md#dead-end-risk) of someone wiping the hard-disk.
+[Version Control Systems](https://en.wikipedia.org/wiki/Version_control) like [Git](https://en.wikipedia.org/wiki/Git) are a useful mitigation of [Dead-End Risk](/tags/Dead-End-Risk), because using them means that at least you can _go back_ to the point where you made the bad decision and go a different way. Additionally, they provide you with backups against the often inadvertent [Dead-End Risk](/tags/Dead-End-Risk) of someone wiping the hard-disk.
## Where Complexity Hides
-So far, we've focused mainly on [Codebase Risk](Complexity-Risk.md#codebase-risk), but this isn't the only place complexity appears in software. We're going to cover a few of these areas now, but be warned, this is not a complete list by any means:
+So far, we've focused mainly on [Codebase Risk](/tags/Codebase-Risk), but this isn't the only place complexity appears in software. We're going to cover a few of these areas now, but be warned, this is not a complete list by any means:
- Algorithmic (Space and Time) Complexity
- Memory Management
@@ -335,23 +335,23 @@ So far, we've focused mainly on [Codebase Risk](Complexity-Risk.md#codebase-risk
There is a whole branch of complexity theory devoted to how the software _runs_, namely [Big O Complexity](https://en.wikipedia.org/wiki/Big_O_notation).
-Once running, an algorithm or data structure will consume space or runtime dependent on its performance characteristics, which may well have an impact on the [Operational Risk](Operational-Risk.md) of the software. Using off-the-shelf data structures and algorithms helps, but you still need to know their performance characteristics.
+Once running, an algorithm or data structure will consume space or runtime dependent on its performance characteristics, which may well have an impact on the [Operational Risk](/tags/Operational-Risk) of the software. Using off-the-shelf data structures and algorithms helps, but you still need to know their performance characteristics.
The [Big O Cheat Sheet](https://bigocheatsheet.com) is a wonderful resource to investigate this further.
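To make this concrete, here is a small sketch of the same lookup with two different performance characteristics:

```javascript
const ids = Array.from({ length: 1_000_000 }, (_, i) => i);

// O(n) per lookup: scans the array each time.
ids.includes(999999);

// O(1) average per lookup: pays an up-front cost to build a hash-based Set.
const idSet = new Set(ids);
idSet.has(999999);
```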
### Memory Management
-Memory Management (and more generally, all resource management in software) is another place where [Complexity Risk](Complexity-Risk.md) hides:
+Memory Management (and more generally, all resource management in software) is another place where [Complexity Risk](/tags/Complexity-Risk) hides:
> "Memory leaks are a common error in programming, especially when using languages that have no built in automatic garbage collection, such as C and C++." - [Memory Leak, _Wikipedia_](https://en.wikipedia.org/wiki/Memory_leak)
-[Garbage Collectors](https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)) (as found in Javascript or Java) offer you the deal that they will mitigate the [Complexity Risk](Complexity-Risk.md) of you having to manage your own memory, but in return perhaps give you fewer guarantees about the _performance_ of your software. Again, there are times when you can't accommodate this [Operational Risk](Operational-Risk.md), but these are rare and usually only affect a small portion of an entire software-system.
+[Garbage Collectors](https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)) (as found in Javascript or Java) offer you a deal: they will mitigate the [Complexity Risk](/tags/Complexity-Risk) of managing your own memory, but in return perhaps give you fewer guarantees about the _performance_ of your software. Again, there are times when you can't accommodate this [Operational Risk](/tags/Operational-Risk), but these are rare and usually only affect a small portion of an entire software-system.
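Note that a garbage collector only mitigates, rather than removes, the risk: it can't collect what you still (accidentally) reference. A sketch of a common leak pattern:

```javascript
// Leaks even under garbage collection: nothing ever removes entries.
const cache = new Map();

function handle(request) {
  cache.set(request.id, request); // grows without bound
  // ...an eviction policy (or a WeakMap keyed by object) would mitigate this
}
```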
### Protocols And Types
-As we saw in [Communication Risk](Communication-Risk.md), whenever two components of a software system need to interact, they have to establish a protocol for doing so. As systems become more complex, and the connectedness increases, it becomes harder to manage the risk around versioning protocols. This becomes especially true when operating beyond the edge of the compiler's domain.
+As we saw in [Communication Risk](/tags/Communication-Risk), whenever two components of a software system need to interact, they have to establish a protocol for doing so. As systems become more complex, and the connectedness increases, it becomes harder to manage the risk around versioning protocols. This becomes especially true when operating beyond the edge of the compiler's domain.
-Although type-checking helps mitigate [Protocol Risk](Communication-Risk.md#protocol-risk), when software systems grow large it becomes hard to communicate intent and keep connectivity low. You can end up with "The Big Ball Of Mud":
+Although type-checking helps mitigate [Protocol Risk](/tags/Protocol-Risk), when software systems grow large it becomes hard to communicate intent and keep connectivity low. You can end up with "The Big Ball Of Mud":
> "A big ball of mud is a software system that lacks a perceivable architecture. Although undesirable from a software engineering point of view, such systems are common in practice due to business pressures, developer turnover and code entropy. " - [Big Ball Of Mud, _Wikipedia_](https://en.wikipedia.org/wiki/Big_ball_of_mud)
@@ -361,27 +361,27 @@ Although modern languages include plenty of concurrency primitives (such as the
[Race conditions](https://en.wikipedia.org/wiki/Race_condition) and [Deadlocks](https://en.wikipedia.org/wiki/Deadlock) abound in over-complicated concurrency designs: complexity issues are magnified by concurrency concerns, and are also hard to test and debug.
-Recently, languages such as [Clojure](https://clojure.org) have introduced [persistent collections](https://en.wikipedia.org/wiki/Persistent_data_structure) to alleviate concurrency issues. The basic premise is that any time you want to _change_ the contents of a collection, you get given back a _new collection_. So, any collection instance is immutable once created. The trade-off is again speed to mitigate [Complexity Risk](Complexity-Risk.md).
+Recently, languages such as [Clojure](https://clojure.org) have introduced [persistent collections](https://en.wikipedia.org/wiki/Persistent_data_structure) to alleviate concurrency issues. The basic premise is that any time you want to _change_ the contents of a collection, you get back a _new collection_. So, any collection instance is immutable once created. The trade-off is, again, giving up some speed to mitigate [Complexity Risk](/tags/Complexity-Risk).
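The same style is available (if not enforced) in Javascript. A sketch - note this copies the collection rather than structurally sharing it, as a true persistent collection would:

```javascript
const original = Object.freeze([1, 2, 3]);

// "Changing" the collection returns a new one; the original is untouched,
// so concurrent readers can never observe a half-finished mutation.
const updated = [...original, 4];

console.log(original); // [1, 2, 3]
console.log(updated);  // [1, 2, 3, 4]
```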
-An important lesson here is that choice of language can reduce complexity: and we'll come back to this in [Software Dependency Risk](Software-Dependency-Risk.md).
+An important lesson here is that choice of language can reduce complexity. We'll come back to this in [Software Dependency Risk](/tags/Software-Dependency-Risk).
### Networking / Security
-There are plenty of [Complexity Risk](Complexity-Risk.md) perils in _anything_ to do with networked code, chief amongst them being error handling and (again) [protocol evolution](Communication-Risk.md#protocol-risk).
+There are plenty of [Complexity Risk](/tags/Complexity-Risk) perils in _anything_ to do with networked code, chief amongst them being error handling and (again) [protocol evolution](/tags/Protocol-Risk).
In the case of security considerations, exploits _thrive_ on the complexity of your code, and the weaknesses that occur because of it. In particular, Schneier's Law says, never implement your own cryptographic scheme:
> "Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break. It's not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis." - [Bruce Schneier, 1998](https://en.wikipedia.org/wiki/Bruce_Schneier#Cryptography)
-Luckily, most good languages include cryptographic libraries that you can include to mitigate these [Complexity Risks](Complexity-Risk.md) from your own code-base.
+Luckily, most good languages include cryptographic libraries that you can use to keep these [Complexity Risks](/tags/Complexity-Risk) out of your own code-base.
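+For example, hashing with the JDK's standard `MessageDigest` API takes a few lines (a sketch assuming Java 17+ for `HexFormat`). The years of analysis Schneier asks for live in the library, not in our code-base:
+
+```java
+import java.nio.charset.StandardCharsets;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+import java.util.HexFormat;
+
+public class HashExample {
+    public static void main(String[] args) throws NoSuchAlgorithmException {
+        // A standard-library algorithm - reviewed for years, not written by us.
+        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
+        byte[] digest = sha256.digest("hello".getBytes(StandardCharsets.UTF_8));
+        System.out.println(HexFormat.of().formatHex(digest));
+    }
+}
+```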
-This is a strong argument for the use of libraries. But when should you use a library and when should you code-your-own? This is covered further in the section on [Software Dependency Risk](Software-Dependency-Risk.md).
+This is a strong argument for the use of libraries. But when should you use a library and when should you code-your-own? This is covered further in the section on [Software Dependency Risk](/tags/Software-Dependency-Risk).
### The Environment
-The complexity of software tends to reflect the complexity of the environment it runs in, and complex software environments are more difficult to reason about, and more susceptible to [Operational Risk](Operational-Risk.md) and [Security-Risk](Agency-Risk.md#security).
+The complexity of software tends to reflect the complexity of the environment it runs in, and complex software environments are more difficult to reason about, and more susceptible to [Operational Risk](/tags/Operational-Risk) and [Security-Risk](Agency-Risk.md#security).
In particular, when we talk about the environment, we are talking about the number of external dependencies that the software has, and the risks we face when relying on those dependencies.
-So the next stop in the tour is a closer look at [Dependency Risk](Dependency-Risk.md).
+So the next stop in the tour is a closer look at [Dependency Risk](/tags/Dependency-Risk).
diff --git a/docs/risks/Coordination-Risk.md b/docs/risks/Coordination-Risk.md
index c4e68d344..0b803af84 100644
--- a/docs/risks/Coordination-Risk.md
+++ b/docs/risks/Coordination-Risk.md
@@ -16,24 +16,24 @@ part_of: Operational Risk
-As in [Agency Risk](Agency-Risk.md), we are going to use the term _agent_, which refers to anything with [agency](Agency-Risk.md#software-processes) in a system to make decisions: that is, an agent has an [Internal Model](../thinking/Glossary.md#internal-model) and can [take actions](../thinking/Glossary.md#taking-action) based on it. Here, we work on the assumption that the agents _are_ working towards a common [Goal](../thinking/Glossary.md#goal), even though in reality it's not always the case, as we saw in the section on [Agency Risk](Agency-Risk.md).
+As in [Agency Risk](/tags/Agency-Risk), we are going to use the term _agent_ to refer to anything in a system with the [agency](Agency-Risk.md#software-processes) to make decisions: that is, an agent has an [Internal Model](/thinking/Glossary.md#internal-model) and can [take actions](/thinking/Glossary.md#taking-action) based on it. Here, we work on the assumption that the agents _are_ working towards a common [Goal](/thinking/Glossary.md#goal), even though in reality this isn't always the case, as we saw in the section on [Agency Risk](/tags/Agency-Risk).
-[Coordination Risk](Coordination-Risk.md) is the risk that agents can fail to coordinate to meet their common goal and end up making things worse. [Coordination Risk](Coordination-Risk.md) is embodied in the phrase "Too Many Cooks Spoil The Broth": more people, opinions or _agents_ often make results worse.
+[Coordination Risk](/tags/Coordination-Risk) is the risk that agents can fail to coordinate to meet their common goal and end up making things worse. [Coordination Risk](/tags/Coordination-Risk) is embodied in the phrase "Too Many Cooks Spoil The Broth": more people, opinions or _agents_ often make results worse.
-In this section, we'll first build up [a model of Coordination Risk](#a-model-of-coordination-risk), describing exactly coordination means and why we do it. Then, we'll look at some classic [problems of coordination](#problems-of-coordination). Then, we're going to consider agency at several different levels (because of [Scale Invariance](../thinking/Crisis-Mode.md#invariance-2-scale-invariance)) . We'll look at:
+In this section, we'll first build up [a model of Coordination Risk](#a-model-of-coordination-risk), describing exactly what coordination means and why we do it. Then we'll look at some classic [problems of coordination](#problems-of-coordination). Finally, because of [Scale Invariance](/thinking/Crisis-Mode.md#invariance-2-scale-invariance), we're going to consider agency at several different levels. We'll look at:
- [Team Decision Making](#decision-making),
- [Living Organisms](#in-living-organisms),
- [Larger Organisations](#large-organisations) and the staff within them,
- and [Software Processes](#in-software-processes).
-... and we'll consider how [Coordination Risk](Coordination-Risk.md) is a problem at each of these scales.
+... and we'll consider how [Coordination Risk](/tags/Coordination-Risk) is a problem at each of these scales.
-But for now, let's crack on and examine where [Coordination Risk](Coordination-Risk.md) comes from.
+But for now, let's crack on and examine where [Coordination Risk](/tags/Coordination-Risk) comes from.
## A Model Of Coordination Risk
-Earlier, in [Dependency Risk](Dependency-Risk.md), we looked at various resources (time, money, people, events etc) and showed how we could [depend on them](Dependency-Risk.md) taking on risk. Here, let's consider the situation where there is _competition for those dependencies_ due to [Scarcity Risk](Scarcity-Risk.md): other agents want to use them in a different way.
+Earlier, in [Dependency Risk](/tags/Dependency-Risk), we looked at various resources (time, money, people, events etc.) and showed how we take on risk by [depending on them](/tags/Dependency-Risk). Here, let's consider the situation where there is _competition for those dependencies_ due to [Scarcity Risk](/tags/Scarcity-Risk): other agents want to use them in a different way.
### Law Of Diminishing Returns
@@ -47,16 +47,16 @@ As you can see, by _sharing_, it's possible that the _total benefit_ is greater
Just two things are needed for competition to occur:
- - Multiple, Individual agents, trying to achieve [Goals](../thinking/Glossary.md#goal).
- - Scarce Resources, which the agents want to use as [Dependencies](Dependency-Risk.md).
+ - Multiple individual agents, each trying to achieve [Goals](/thinking/Glossary.md#goal).
+ - Scarce Resources, which the agents want to use as [Dependencies](/tags/Dependency-Risk).
### Coordination via Communication
-The only way that the agents can move away from competition towards coordination is via [Communication](Communication-Risk.md), and this is where their coordination problems begin.
+The only way that the agents can move away from competition towards coordination is via [Communication](/tags/Communication-Risk), and this is where their coordination problems begin.
-[Coordination Risk](Coordination-Risk.md) commonly occurs where people have different ideas about how to achieve a [goal](../thinking/Glossary.md#goal), and they have different ideas because they have different [Internal Models](../thinking/Glossary.md#internal-model). As we saw in the section on [Communication Risk](Communication-Risk.md), we can only hope to synchronise [Internal Models](../thinking/Glossary.md#internal-model) if there are high-bandwidth [Channels](Communication-Risk.md#channels) available for communication.
+[Coordination Risk](/tags/Coordination-Risk) commonly occurs where people have different ideas about how to achieve a [goal](/thinking/Glossary.md#goal), and they have different ideas because they have different [Internal Models](/thinking/Glossary.md#internal-model). As we saw in the section on [Communication Risk](/tags/Communication-Risk), we can only hope to synchronise [Internal Models](/thinking/Glossary.md#internal-model) if there are high-bandwidth [Channels](Communication-Risk.md#channels) available for communication.
-You might think, therefore, that this is just another type of [Communication Risk](Communication-Risk.md) problem, and that's often a part of it, but even with synchronized [Internal Models](../thinking/Glossary.md#internal-model), coordination risk can occur. Imagine the example of people all trying to madly leave a burning building. They all have the same information (the building is on fire). If they coordinate, and leave in an orderly fashion, they might all get out. If they don't, and there's a scramble for the door, more people might die.
+You might think, therefore, that this is just another type of [Communication Risk](/tags/Communication-Risk) problem, and that's often a part of it, but even with synchronized [Internal Models](/thinking/Glossary.md#internal-model), coordination risk can occur. Imagine people all madly trying to leave a burning building. They all have the same information (the building is on fire). If they coordinate, and leave in an orderly fashion, they might all get out. If they don't, and there's a scramble for the door, more people might die.
![Coordination Risk - Mitigated by Communication](/img/generated/risks/coordination/coordination-risk.png)
@@ -81,13 +81,13 @@ Let's unpack this idea, and review some classic problems of coordination, none o
6. **[Race Conditions](https://en.wikipedia.org/wiki/Race_condition)** are where we can't be sure of the result of a calculation, because it is dependent on the ordering of events within a system. For example, two separate threads writing the same memory at the same time (one ignoring and over-writing the work of the other) is a race (see the sketch after this list).
-7. **Contention**: where there is [Scarcity Risk](Scarcity-Risk.md) for a [dependency](Dependency-Risk.md), we might want to make sure that everyone gets fair use of it, by taking turns, booking, queueing and so on. As we saw in [Scarcity Risk](Scarcity-Risk.md), sometimes this is handled for us by the [Dependency](Dependency-Risk.md) itself. However if it isn't, it's the _users_ of the dependency who'll need to coordinate to use the resource fairly, again, by communicating with each other.
+7. **Contention**: where there is [Scarcity Risk](/tags/Scarcity-Risk) for a [dependency](/tags/Dependency-Risk), we might want to make sure that everyone gets fair use of it, by taking turns, booking, queueing and so on. As we saw in [Scarcity Risk](/tags/Scarcity-Risk), sometimes this is handled for us by the [Dependency](/tags/Dependency-Risk) itself. However, if it isn't, it's the _users_ of the dependency who'll need to coordinate to use the resource fairly, again, by communicating with each other.
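+Here is the race from point 6 as a hedged Java sketch (illustrative only): two threads read-modify-write the same field without coordinating, and updates are silently lost.
+
+```java
+public class RaceDemo {
+    static int counter = 0; // shared between threads, unsynchronised
+
+    public static void main(String[] args) throws InterruptedException {
+        Runnable work = () -> {
+            for (int i = 0; i < 100_000; i++) counter++; // read, add, write: not atomic
+        };
+        Thread t1 = new Thread(work);
+        Thread t2 = new Thread(work);
+        t1.start(); t2.start();
+        t1.join(); t2.join();
+        // Expected 200000; most runs print less, because one thread's write
+        // over-writes the other's - the result depends on event ordering.
+        System.out.println(counter);
+    }
+}
+```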
## Decision Making
-Within a team, [Coordination Risk](Coordination-Risk.md) is at its core about resolving [Internal Model](../thinking/Glossary.md#internal-model) conflicts in order that everyone can agree on a [Goal](../thinking/Glossary.md#goal) and cooperate on getting it done. Therefore, [Coordination Risk](Coordination-Risk.md) is worse on projects with more members, and worse in organisations with more staff.
+Within a team, [Coordination Risk](/tags/Coordination-Risk) is at its core about resolving [Internal Model](/thinking/Glossary.md#internal-model) conflicts in order that everyone can agree on a [Goal](/thinking/Glossary.md#goal) and cooperate on getting it done. Therefore, [Coordination Risk](/tags/Coordination-Risk) is worse on projects with more members, and worse in organisations with more staff.
-As an individual, do you suffer from [Coordination Risk](Coordination-Risk.md) at all? Maybe: sometimes, you can feel "conflicted" about the best way to solve a problem. And weirdly, usually _not thinking about it_ helps. Sleeping too. (Rich Hickey calls this "[Hammock Driven Development](https://www.youtube.com/watch?v=f84n5oFoZBc)"). This is probably because, unbeknownst to you, your subconscious is furiously communicating internally, trying to resolve these conflicts itself, and will let you know when it has come to a resolution.
+As an individual, do you suffer from [Coordination Risk](/tags/Coordination-Risk) at all? Maybe: sometimes, you can feel "conflicted" about the best way to solve a problem. And weirdly, usually _not thinking about it_ helps. Sleeping too. (Rich Hickey calls this "[Hammock Driven Development](https://www.youtube.com/watch?v=f84n5oFoZBc)"). This is probably because, unbeknownst to you, your subconscious is furiously communicating internally, trying to resolve these conflicts itself, and will let you know when it has come to a resolution.
![Vroom And Yetton Decision Making Styles. "d" indicates authority in making a decision, circles are subordinates. Thin lines with arrow-heads show information flow, whilst thick lines show _opinions_ being passed around.](/img/generated/risks/coordination/vroom-yetton.png)
@@ -108,22 +108,22 @@ As an individual, do you suffer from [Coordination Risk](Coordination-Risk.md) a
**s** = subordinate
-At the top, you have the _least_ consultative styles, and at the bottom, the _most_. At the top, decisions are made with just the leader's [Internal Model](../thinking/Glossary.md#internal-model), but moving down, the [Internal Models](../thinking/Glossary.md#internal-model) of the _subordinates_ are increasingly brought into play.
+At the top, you have the _least_ consultative styles, and at the bottom, the _most_. At the top, decisions are made with just the leader's [Internal Model](/thinking/Glossary.md#internal-model), but moving down, the [Internal Models](/thinking/Glossary.md#internal-model) of the _subordinates_ are increasingly brought into play.
-The decisions at the top are faster, but don't do much for mitigating [Coordination Risk](Coordination-Risk.md). The ones below take longer (incurring [Schedule Risk](Scarcity-Risk.md#schedule-risk)) but mitigate more [Coordination Risk](Coordination-Risk.md). Group decision-making inevitably involves everyone _learning_ and improving their [Internal Models](../thinking/Glossary.md#internal-model).
+The decisions at the top are faster, but don't do much for mitigating [Coordination Risk](/tags/Coordination-Risk). The ones below take longer (incurring [Schedule Risk](/tags/Schedule-Risk)) but mitigate more [Coordination Risk](/tags/Coordination-Risk). Group decision-making inevitably involves everyone _learning_ and improving their [Internal Models](/thinking/Glossary.md#internal-model).
The trick is to be able to tell which approach is suitable at which time. Everyone is expected to make decisions _within their realm of expertise_: you can't have developers continually calling meetings to discuss whether they should be using an [Abstract Factory](https://en.wikipedia.org/wiki/Abstract_factory_pattern) or a [Factory Method](https://en.wikipedia.org/wiki/Factory_method_pattern): it would waste time. The critical question is therefore, "what's the biggest risk?"
- - Is the [Coordination Risk](Coordination-Risk.md) greater? Are we going to suffer [Dead End Risk](Complexity-Risk.md) if the decision is made wrongly? What if people don't agree with it? Poor leadership has an impact on [morale](Agency-Risk.md#morale-failure) too.
- - Is the [Schedule Risk](Scarcity-Risk.md#schedule-risk) greater? If you have a 1-hour meeting with eight people to decide a decision, that's _one person day_ gone right there: group decision making is _expensive_.
+ - Is the [Coordination Risk](/tags/Coordination-Risk) greater? Are we going to suffer [Dead End Risk](/tags/Complexity-Risk) if the decision is made wrongly? What if people don't agree with it? Poor leadership has an impact on [morale](Agency-Risk.md#morale-failure) too.
+ - Is the [Schedule Risk](/tags/Schedule-Risk) greater? If you have a 1-hour meeting with eight people to make a decision, that's _one person day_ gone right there: group decision making is _expensive_.
-So _organisation_ can reduce [Coordination Risk](Coordination-Risk.md) but to make this work we need more _communication_, and this has attendant complexity and time costs.
+So _organisation_ can reduce [Coordination Risk](/tags/Coordination-Risk) but to make this work we need more _communication_, and this has attendant complexity and time costs.
### Staff As Agents
-Staff in a team have a dual nature: they are **Agents** and **Resources** at the same time. The team [depends](Dependency-Risk.md) on staff for their resource of _labour_, but they're also part of the decision making process of the team, because they have [_agency_](Agency-Risk.md) over their own actions.
+Staff in a team have a dual nature: they are **Agents** and **Resources** at the same time. The team [depends](/tags/Dependency-Risk) on staff for their resource of _labour_, but they're also part of the decision making process of the team, because they have [_agency_](/tags/Agency-Risk) over their own actions.
-Part of [Coordination Risk](Coordination-Risk.md) is about trying to mitigate differences in [Internal Models](../thinking/Glossary.md#internal-model). So it's worth considering how varied people's models can be:
+Part of [Coordination Risk](/tags/Coordination-Risk) is about trying to mitigate differences in [Internal Models](/thinking/Glossary.md#internal-model). So it's worth considering how varied people's models can be:
- Different skill levels
- Different experiences
@@ -135,26 +135,26 @@ The job of harmonising this on a project would seem to fall to the team leader,
> "The forming–storming–norming–performing model of group development was first proposed by Bruce Tuckman in 1965, who said that these phases are all necessary and inevitable in order for the team to grow, face up to challenges, tackle problems, find solutions, plan work, and deliver results." - [Tuckman's Stages Of Group Development, _Wikipedia_](https://en.wikipedia.org/wiki/Tuckman%27s_stages_of_group_development)
-Specifically this describes a process whereby a new group will form and then be required to work together. In the process, they will have many _disputes_. Ideally, the group will resolve these disputes internally and emerge as a team, with a common [Goal](../thinking/Glossary.md#goal).
+Specifically this describes a process whereby a new group will form and then be required to work together. In the process, they will have many _disputes_. Ideally, the group will resolve these disputes internally and emerge as a team, with a common [Goal](/thinking/Glossary.md#goal).
-Since [Coordination](Coordination-Risk.md) is about [Resource Allocation](Coordination-Risk.md#problems-of-coordination) the skills of staff can potentially be looked at as resources to allocate. This means handling [Coordination Risk](Coordination-Risk.md) issues like:
+Since [Coordination](/tags/Coordination-Risk) is about [Resource Allocation](Coordination-Risk.md#problems-of-coordination), the skills of staff can potentially be looked at as resources to allocate. This means handling [Coordination Risk](/tags/Coordination-Risk) issues like:
- - People leaving, taking their [Internal Models](../thinking/Glossary.md#internal-model) and expertise with them ([Key Person Risk](Scarcity-Risk.md#staff-risk)).
- - People requiring external training, to understand new tools and techniques ([Learning Curve Risk](Communication-Risk.md#learning-curve-risk)).
- - People being protective about their knowledge in order to protect their jobs ([Agency Risk](Agency-Risk.md)).
+ - People leaving, taking their [Internal Models](/thinking/Glossary.md#internal-model) and expertise with them ([Key Person Risk](Scarcity-Risk.md#staff-risk)).
+ - People requiring external training, to understand new tools and techniques ([Learning Curve Risk](/tags/Learning-Curve-Risk)).
+ - People being protective about their knowledge in order to protect their jobs ([Agency Risk](/tags/Agency-Risk)).
> "As a rough rule, three programmers organised into a team can do only twice the work of a single programmer of the same ability - because of time spent on coordination problems." - [Gerald Wienberg, _The Psychology of Computer Programming_](https://en.wikipedia.org/wiki/Gerald_Weinberg)
## In Living Organisms
-Vroom and Yetton's organisational model isn't relevant to just teams of people. We can see it in the natural world too. Although _the majority_ of cellular life on earth (by weight) is [single celled organisms](http://archive.today/2012.12.05-091021/http://www.stephenjaygould.org/library/gould_bacteria.html), the existence of _humans_ (to pick a single example) demonstrates that sometimes it's better for cells to try to mitigate [Coordination Risk](Coordination-Risk.md) and work as a team, accepting the [Complexity Risk](Complexity-Risk.md) and [Communication Risk](Communication-Risk.md) this entails. For example, in the human body, we have:
+Vroom and Yetton's organisational model isn't relevant just to teams of people. We can see it in the natural world too. Although _the majority_ of cellular life on earth (by weight) is [single celled organisms](http://archive.today/2012.12.05-091021/http://www.stephenjaygould.org/library/gould_bacteria.html), the existence of _humans_ (to pick a single example) demonstrates that sometimes it's better for cells to try to mitigate [Coordination Risk](/tags/Coordination-Risk) and work as a team, accepting the [Complexity Risk](/tags/Complexity-Risk) and [Communication Risk](/tags/Communication-Risk) this entails. For example, in the human body, we have:
- **Various [systems](https://en.wikipedia.org/wiki/List_of_systems_of_the_human_body)**: such as the [Respiratory System](https://en.wikipedia.org/wiki/Respiratory_system) or the [Digestive System](https://en.wikipedia.org/wiki/Human_digestive_system). Each of these systems contains...
- **Organs**, such as the heart or lungs, which contain...
- **Tissues**, which contain...
- **Cells** of different types. (Even cells are complex systems containing multiple different, communicating sub-systems.)
-There is huge attendant [Coordination Risk](Coordination-Risk.md) to running a complex multi-cellular system like the human body, but given the success of humanity as a species, you must conclude that these steps on the _evolutionary_ [Risk Landscape](Risk-Landscape.md) have benefited us in our ecological niche.
+There is huge attendant [Coordination Risk](/tags/Coordination-Risk) to running a complex multi-cellular system like the human body, but given the success of humanity as a species, you must conclude that these steps on the _evolutionary_ [Risk Landscape](Risk-Landscape.md) have benefited us in our ecological niche.
### Decision Making
@@ -168,17 +168,17 @@ Working in a large organisation often feels like being a cell in a larger organi
_Less_ consultative decision-making styles are more appropriate, then, when we don't have the luxury of high-bandwidth channels for discussion. When the number of parties rises above a room-full of people it's not possible to hear everyone's voice. As you can see from the table above, for **CII** and **GII** decision-making styles, the amount of communication increases non-linearly with the number of participants, so we need something simpler.
-As we saw in the [Complexity Risk](Complexity-Risk.md) section, hierarchies are an excellent way of economising on number of different communication channels, and we use these frequently when there are lots of parties to coordinate.
+As we saw in the [Complexity Risk](/tags/Complexity-Risk) section, hierarchies are an excellent way of economising on the number of different communication channels, and we use these frequently when there are lots of parties to coordinate.
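+To put a number on "non-linearly": a fully-connected group of `n` people has `n(n-1)/2` potential channels, whereas a hierarchy needs only `n-1` reporting lines. A quick sketch:
+
+```java
+// Why hierarchies economise on communication channels.
+public class Channels {
+    public static void main(String[] args) {
+        for (int n : new int[] { 3, 8, 50 }) {
+            long mesh = (long) n * (n - 1) / 2; // everyone talks to everyone
+            long tree = n - 1;                  // everyone talks via a leader
+            System.out.printf("n=%2d  mesh=%4d  hierarchy=%2d%n", n, mesh, tree);
+        }
+    }
+}
+```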
-In large organisations, teams are created and leaders chosen for those teams precisely to mitigate this [Communication Risk](Communication-Risk.md). We're all familiar with this: control of the team is ceded to the leader, who takes on the role of 'handing down' direction from above, but also 'reporting up' issues that cannot be resolved within the team. In Vroom and Yetton's model, this is moving from a **GII** or **CII** to an **AI** or **AII** style of leadership.
+In large organisations, teams are created and leaders chosen for those teams precisely to mitigate this [Communication Risk](/tags/Communication-Risk). We're all familiar with this: control of the team is ceded to the leader, who takes on the role of 'handing down' direction from above, but also 'reporting up' issues that cannot be resolved within the team. In Vroom and Yetton's model, this is moving from a **GII** or **CII** to an **AI** or **AII** style of leadership.
Clearly, this is just a _model_: it's not set in stone, and decision-making styles usually change from day to day and decision to decision. The same is not true in our software - _rules are rules_.
## In Software Processes
-It should be pretty clear that we are applying our [Scale Invariance](../thinking/Crisis-Mode.md#invariance-2-scale-invariance) rule to [Coordination Risk](Coordination-Risk.md): all of the problems we've described as affecting teams and organisations also affect software, although the scale and terrain are different. Software processes have limited _agency_ - in most cases they follow fixed rules set down by the programmers, rather than self-organising like people can (so far).
+It should be pretty clear that we are applying our [Scale Invariance](/thinking/Crisis-Mode.md#invariance-2-scale-invariance) rule to [Coordination Risk](/tags/Coordination-Risk): all of the problems we've described as affecting teams and organisations also affect software, although the scale and terrain are different. Software processes have limited _agency_ - in most cases they follow fixed rules set down by the programmers, rather than self-organising like people can (so far).
-As before, in order to face [Coordination Risk](Coordination-Risk.md) in software, we need multiple agents all working together. [Coordination Risks](Coordination-Risk.md) (such as race conditions or deadlock) only really occur where _more than one agent working at the same time_. This means we are considering _at least_ multi-threaded software, and anything above that (multiple CPUs, servers, data-centres and so on).
+As before, in order to face [Coordination Risk](/tags/Coordination-Risk) in software, we need multiple agents all working together. [Coordination Risks](/tags/Coordination-Risk) (such as race conditions or deadlock) only really occur where _more than one agent is working at the same time_. This means we are considering _at least_ multi-threaded software, and anything above that (multiple CPUs, servers, data-centres and so on).
### CAP Theorem
@@ -216,11 +216,11 @@ To be consistent, Agent 2 needs to check with Agent 1 to make sure it has the la
![In a CA system, we can't have partition tolerance, so in order to be consistent a single Agent has to do all the work](/img/generated/risks/coordination/cap-ca.png)
-Finally, if we have a CA system, we are essentially saying that _only one agent is doing the work_. (You can't partition a single agent, after all). But this leads to [Resource Allocation](https://en.wikipedia.org/wiki/Resource_allocation) and **Contention** around use of the scarce resource of `Agent 2`'s attention. (Both [Coordination Risk](Coordination-Risk.md) issues we met earlier.)
+Finally, if we have a CA system, we are essentially saying that _only one agent is doing the work_. (You can't partition a single agent, after all.) But this leads to [Resource Allocation](https://en.wikipedia.org/wiki/Resource_allocation) and **Contention** around use of the scarce resource of `Agent 2`'s attention. (Both [Coordination Risk](/tags/Coordination-Risk) issues we met earlier.)
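+As a toy sketch of the trade-off (no real networking here - `partitioned` just simulates losing contact with a peer), this is what choosing C-over-A versus A-over-C looks like on a single read:
+
+```java
+class Replica {
+    private String value = "v1";
+    private boolean partitioned = false; // simulates a network partition
+
+    void partition() { partitioned = true; }
+    void write(String v) { value = v; }
+
+    // CP behaviour: if we can't confirm with our peer, refuse to answer.
+    String readConsistent() {
+        if (partitioned) {
+            throw new IllegalStateException("unavailable: choosing C over A");
+        }
+        return value; // peer reachable, so we could verify we're in sync first
+    }
+
+    // AP behaviour: always answer, accepting that the answer may be stale.
+    String readAvailable() {
+        return value; // during a partition this may be out of date: A over C
+    }
+}
+```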
### Some Real-Life Examples
-This sets a lower bound on [Coordination Risk](Coordination-Risk.md): we _can't_ get rid of it completely in a software system, -or- a system on any other scale. Fundamentally, coordination problems are inescapable at some level. The best we can do is mitigate it by agreeing on protocols and doing lots of communication.
+This sets a lower bound on [Coordination Risk](/tags/Coordination-Risk): we _can't_ get rid of it completely in a software system, or indeed in a system at any other scale. Fundamentally, coordination problems are inescapable at some level. The best we can do is mitigate them by agreeing on protocols and doing lots of communication.
Let's look at some real-life examples of how this manifests in software.
@@ -236,7 +236,7 @@ ZooKeeper handles this by communicating inter-agent with its own protocol. It e
Second, [Git](https://en.wikipedia.org/wiki/Git) is a (mainly) write-only ledger of source changes. However, as we already discussed above, where different agents make incompatible changes, someone has to decide how to resolve the conflicts so that we have a single source of truth.
-The [Coordination Risk](Coordination-Risk.md) just _doesn't go away_.
+The [Coordination Risk](/tags/Coordination-Risk) just _doesn't go away_.
Since multiple users can make all the changes they like locally, and merge them later, Git is an `AP` system where everyone's opinion counts (**GII**): individual users may have _wildly_ different ideas about what the source looks like until the merge is complete.
@@ -244,22 +244,22 @@ Since multiple users can make all the changes they like locally, and merge them
Finally, [Bitcoin (BTC)](https://en.wikipedia.org/wiki/Bitcoin) is a write-only [distributed ledger](https://en.wikipedia.org/wiki/Distributed_ledger), where agents _compete_ to mine BTC (a **UI** style organisation), but also at the same time record transactions on the ledger. BTC is also `AP`, in a similar way to Git. But new changes can only be appended if you have the latest version of the ledger. If you append to an out-of-date ledger, your work will be lost.
-Because it's based on outright competition, if someone beats you to completing a mining task, then your work is wasted. So, there is _huge_ [Coordination Risk](Coordination-Risk.md).
+Because it's based on outright competition, if someone beats you to completing a mining task, then your work is wasted. So, there is _huge_ [Coordination Risk](/tags/Coordination-Risk).
-For this reason, BTC agents [coordinate](Coordination-Risk.md) into [mining consortia](https://en.bitcoin.it/wiki/Comparison_of_mining_pools), so they can avoid working on the same tasks at the same time, turning it into a **CI**-type organisation.
+For this reason, BTC agents [coordinate](/tags/Coordination-Risk) into [mining consortia](https://en.bitcoin.it/wiki/Comparison_of_mining_pools), so they can avoid working on the same tasks at the same time, turning it into a **CI**-type organisation.
This in itself is a problem because the whole _point_ of BTC is that it's competitive and no one entity has control. So, mining pools tend to stop growing before they reach 50% of the BTC network's processing power. Taking control would be [politically disastrous](https://www.reddit.com/r/Bitcoin/comments/5fe9vz/in_the_last_24hrs_three_mining_pools_have_control/) and confidence in the currency (such as there is) would likely be lost.
## Communication Is For Coordination
-CAP theory gives us a fundamental limit on how much [Coordination Risk](Coordination-Risk.md) we can mitigate. We've looked at different organisational structures used to manage [Coordination Risk](Coordination-Risk.md) within teams of people, organisations or living organisms, so it's the case in software.
+CAP theory gives us a fundamental limit on how much [Coordination Risk](/tags/Coordination-Risk) we can mitigate. We've looked at different organisational structures used to manage [Coordination Risk](/tags/Coordination-Risk) within teams of people, organisations and living organisms, and the same holds in software.
-At the start of this section, we questioned whether [Coordination Risk](Coordination-Risk.md) was just another type of [Communication Risk](Communication-Risk.md). However, it should be clear after looking at the examples of competition, cellular life and Vroom and Yetton's Model that this is exactly _backwards_.
+At the start of this section, we questioned whether [Coordination Risk](/tags/Coordination-Risk) was just another type of [Communication Risk](/tags/Communication-Risk). However, it should be clear after looking at the examples of competition, cellular life and Vroom and Yetton's Model that this is exactly _backwards_.
- Most single-celled life has no need for communication: it simply competes for the available resources. If it lacks anything it needs, it dies.
- There are _no_ lines of communication on the **UI** decision-type. It's only when we want to avoid competition - by sharing resources and working towards common goals - that we need to communicate.
- Therefore, the whole point of communication _is for coordination_.
-In the next section, [Map And Territory Risk](Map-And-Territory-Risk.md), we're going to look at some new ways in which systems can fail, despite their attempts to coordinate.
+In the next section, [Map And Territory Risk](/tags/Map-And-Territory-Risk), we're going to look at some new ways in which systems can fail, despite their attempts to coordinate.
\ No newline at end of file
diff --git a/docs/risks/Dependency-Risks/Agency-Risk.md b/docs/risks/Dependency-Risks/Agency-Risks/Agency-Risk.md
similarity index 72%
rename from docs/risks/Dependency-Risks/Agency-Risk.md
rename to docs/risks/Dependency-Risks/Agency-Risks/Agency-Risk.md
index 5a61d5435..bb5d8817b 100644
--- a/docs/risks/Dependency-Risks/Agency-Risk.md
+++ b/docs/risks/Dependency-Risks/Agency-Risks/Agency-Risk.md
@@ -2,7 +2,7 @@
title: Agency Risk
description: People all have their own agendas. What do you do about that?
-
+slug: /risks/Agency-Risk
tags:
- Risks
- Goal
@@ -17,31 +17,31 @@ part_of: Dependency Risk
-Coordinating a team is difficult enough when everyone on the team has a single [Goal](../thinking/Glossary.md#goal). But people have their own goals too. Sometimes their goals harmlessly co-exist with the team's goal, other times they don't.
+Coordinating a team is difficult enough when everyone on the team has a single [Goal](/thinking/Glossary.md#goal). But people have their own goals too. Sometimes their goals harmlessly co-exist with the team's goal, other times they don't.
-This is [Agency Risk](Agency-Risk.md).
+This is [Agency Risk](/tags/Agency-Risk).
![Agency Risk](/img/generated/risks/agency/agency-risk.png)
-In this section, we are going to take a closer look at how [Agency Risk](Agency-Risk.md) arises, in particular we will:
+In this section, we are going to take a closer look at how [Agency Risk](/tags/Agency-Risk) arises, in particular we will:
- - apply the concept of [Agency Risk](Agency-Risk.md) in software development
- - define a model for understanding [Agency Risk](Agency-Risk.md)
- - look at some common issues in software development, and analyse how they have their roots in [Agency Risk](Agency-Risk.md)
- - look at how [Agency Risk](Agency-Risk.md) applies to not just to people, but _whole teams_ and _software agents_
- - look at the various ways to mitigate [Agency Risk](Agency-Risk.md), irrespective of what type of agent we are looking at. (We'll specifically consider _software agents_, _humans_ and _cells in the body_.)
+ - apply the concept of [Agency Risk](/tags/Agency-Risk) in software development
+ - define a model for understanding [Agency Risk](/tags/Agency-Risk)
+ - look at some common issues in software development, and analyse how they have their roots in [Agency Risk](/tags/Agency-Risk)
+ - look at how [Agency Risk](/tags/Agency-Risk) applies not just to people, but to _whole teams_ and _software agents_
+ - look at the various ways to mitigate [Agency Risk](/tags/Agency-Risk), irrespective of what type of agent we are looking at. (We'll specifically consider _software agents_, _humans_ and _cells in the body_.)
## Agency In Software Development
-To introduce [Agency Risk](Agency-Risk.md), let's first look at the **Principal-Agent Dilemma**. This term comes from finance and refers to the situation where you (the "principal") entrust your money to someone (the "agent") in order to invest it, but they don't necessarily have your best interests at heart. They may instead elect to invest the money in ways that help them, or outright steal it.
+To introduce [Agency Risk](/tags/Agency-Risk), let's first look at the **Principal-Agent Dilemma**. This term comes from finance and refers to the situation where you (the "principal") entrust your money to someone (the "agent") in order to invest it, but they don't necessarily have your best interests at heart. They may instead elect to invest the money in ways that help them, or outright steal it.
> "This dilemma exists in circumstances where agents are motivated to act in their own best interests, which are contrary to those of their principals, and is an example of moral hazard." - [Principal-Agent Problem, _Wikipedia_](https://en.wikipedia.org/wiki/Principal–agent_problem)
The less visibility you have of the agent's activities, the bigger the risk. However, the _whole point_ of giving the money to the agent was that you would have to spend less time and effort managing it, hence the dilemma.
-In software development, we're not lending each other money, but we _are_ being paid by the project sponsor, so they are assuming [Agency Risk](Agency-Risk.md) by employing us.
+In software development, we're not lending each other money, but we _are_ being paid by the project sponsor, so they are assuming [Agency Risk](/tags/Agency-Risk) by employing us.
-[Agency Risk](Agency-Risk.md) doesn't just apply to people: it can apply to _running software_ or _whole teams_ - anything which has agency over its actions.
+[Agency Risk](/tags/Agency-Risk) doesn't just apply to people: it can apply to _running software_ or _whole teams_ - anything which has agency over its actions.
> "Agency is the capacity of an actor to act in a given environment... Agency may either be classified as unconscious, involuntary behaviour, or purposeful, goal directed activity (intentional action). " - [Agency, _Wikipedia_](https://en.wikipedia.org/wiki/Agency_(philosophy))
@@ -49,7 +49,7 @@ In software development, we're not lending each other money, but we _are_ being
![Goal Hierarchy](/img/generated/risks/agency/hierarchy.png)
-Although the definition of [Agency Risk](Agency-Risk.md) above pertains to looking after other people's money, this is just a single example of a wider issue which is best understood by appreciating that humans have a _hierarchy of concern_ with respect to their goals, as shown in the diagram above. This hierarchy has arisen from millennia of evolution and helps us prioritise competing goals, generally in favour of _preserving our genes_.
+Although the definition of [Agency Risk](/tags/Agency-Risk) above pertains to looking after other people's money, this is just a single example of a wider issue which is best understood by appreciating that humans have a _hierarchy of concern_ with respect to their goals, as shown in the diagram above. This hierarchy has arisen from millennia of evolution and helps us prioritise competing goals, generally in favour of _preserving our genes_.
The model above helps us explain the principal-agent problem: when faced with the dilemma of self-interest (perhaps protecting their family) vs. their employer, people will choose their family. But it goes further - this model explains a lot of human behaviour. It explains why some people:
@@ -57,7 +57,7 @@ The model above helps us explain the principal-agent problem: when faced with t
- love their pets (who they consider in the _immediate family_ group) but eat other animals (somewhere off the bottom).
- can be fiercely _nationalistic_ and tribal (supporting the goals of the third level) yet opposed to _immigration_ (which would help people in the fourth level).
-[Agency Risk](Agency-Risk.md) clearly includes the behaviour of [Bad Actors](https://en.wiktionary.org/wiki/bad_actor) but is not limited to them: there are various "shades of grey" involved. We can often understand and sympathise with the decisions agents make based on an understanding of this hierarchy.
+[Agency Risk](/tags/Agency-Risk) clearly includes the behaviour of [Bad Actors](https://en.wiktionary.org/wiki/bad_actor) but is not limited to them: there are various "shades of grey" involved. We can often understand and sympathise with the decisions agents make based on an understanding of this hierarchy.
**NB:** Don't get hung up on the fact that the diagram only has four levels. You might want to add other levels in there depending on your personal circumstances. The take-away is that there is a hierarchy at all, and that at the top, the people/things we care about _most_ are few in number.
@@ -67,7 +67,7 @@ We shouldn't expect people on a project to sacrifice their personal lives for th
> "Game development... requires long working hours and dedication... Some video game developers (such as Electronic Arts) have been accused of the excessive invocation of 'crunch time'. 'Crunch time' is the point at which the team is thought to be failing to achieve milestones needed to launch a game on schedule. " - [Crunch Time, _Wikipedia_](https://en.wikipedia.org/wiki/Video_game_developer#"Crunch_time")
-People taking time off, going to funerals, looking after sick relatives and so on are all acceptable forms of [Agency Risk](Agency-Risk.md). They are the a risk of having _staff_ rather than _slaves_.
+People taking time off, going to funerals, looking after sick relatives and so on are all acceptable forms of [Agency Risk](/tags/Agency-Risk). They are the risk you take on by having _staff_ rather than _slaves_.
![Heroism](/img/generated/risks/agency/heroism.png)
@@ -86,11 +86,11 @@ Sometimes projects don't get done without heroes. But other times, the hero has
- A desire for recognition and acclaim from colleagues.
- For the job security of being a [Key Person](https://en.wikipedia.org/wiki/Key_person_insurance).
-A team _can_ make use of heroism but it's a double-edged sword. The hero can become [a bottleneck](Coordination-Risk.md) to work getting done and because they want to solve all the problems themselves, they [under-communicate](Communication-Risk.md).
+A team _can_ make use of heroism but it's a double-edged sword. The hero can become [a bottleneck](/tags/Coordination-Risk) to work getting done and because they want to solve all the problems themselves, they [under-communicate](/tags/Communication-Risk).
### CV Building
-CV Building is when someone decides that the project needs a dose of "Some Technology X", but in actual fact, this is either completely unhelpful to the project (incurring large amounts of [Complexity Risk](Complexity-Risk.md)), or merely a poor alternative to something else.
+CV Building is when someone decides that the project needs a dose of "Some Technology X", but in actual fact, this is either completely unhelpful to the project (incurring large amounts of [Complexity Risk](/tags/Complexity-Risk)), or merely a poor alternative to something else.
It's very easy to spot CV building: look for choices of technology that are incongruously complex compared to the problem they solve, and then challenge them by suggesting a simpler alternative.
@@ -98,7 +98,7 @@ It's very easy to spot CV building: look for choices of technology that are inc
Heroes can be useful, but _underused_ project members are a nightmare. The problem is, people who are not fully occupied begin to worry that actually the team would be better off without them, and then wonder if their jobs are at risk.
-Even if they don't worry about their jobs, sometimes they need ways to stave off _boredom_. The solution to this is "busy-work": finding tasks that, at first sight, look useful, and then delivering them in an over-elaborate way that'll keep them occupied. This is also known as [_Gold Plating_](https://en.wikipedia.org/wiki/Gold_plating_(software_engineering)). This will leave you with more [Complexity Risk](Complexity-Risk.md) than you had in the first place.
+Even if they don't worry about their jobs, sometimes they need ways to stave off _boredom_. Their solution is often "busy-work": finding tasks that, at first sight, look useful, and then delivering them in an over-elaborate way that'll keep them occupied. This is also known as [_Gold Plating_](https://en.wikipedia.org/wiki/Gold_plating_(software_engineering)). This will leave you with more [Complexity Risk](/tags/Complexity-Risk) than you had in the first place.
### Pet Projects
@@ -114,7 +114,7 @@ Working on a pet project usually means you get lots of attention (and more than
> "Morale, also known as Esprit de Corps, is the capacity of a group's members to retain belief in an institution or goal, particularly in the face of opposition or hardship" - [Morale, _Wikipedia_](https://en.wikipedia.org/wiki/Morale)
-Sometimes the morale of the team or individuals within it dips, leading to lack of motivation. Losing morale is a kind of [Agency Risk](Agency-Risk.md) because it really means that a team member or the whole team isn't committed to the [Goal](../thinking/Glossary.md#goal) and may decide their efforts are best spent elsewhere. Morale failure might be caused by:
+Sometimes the morale of the team or individuals within it dips, leading to lack of motivation. Losing morale is a kind of [Agency Risk](/tags/Agency-Risk) because it really means that a team member or the whole team isn't committed to the [Goal](/thinking/Glossary.md#goal) and may decide their efforts are best spent elsewhere. Morale failure might be caused by:
- **External Factors**: perhaps the employee's dog has died, or they're simply tired of the industry, or are not feeling challenged.
- **The goal feels unachievable**: in this case people won't commit their full effort to it. This might be due to a difference in the evaluation of the risks on the project between the team members and the leader. In military science, a second meaning of morale is how well supplied and equipped a unit is. This would also seem like a useful reference point for IT projects. If teams are under-staffed or under-equipped, it will impact on motivation too.
@@ -136,7 +136,7 @@ Given the fluidity of the goal hierarchy for people, we shouldn't be surprised t
![Software Goals](/img/generated/risks/agency/software.png)
-Compared to humans, most software has a simple goal hierarchy, as shown in the diagram above. Nevertheless, there is significant [Agency Risk](Agency-Risk.md) in running software _at all_. Since computer systems follow rules we set for them, we shouldn't be surprised when those rules have exceptions that lead to disaster. For example:
+Compared to humans, most software has a simple goal hierarchy, as shown in the diagram above. Nevertheless, there is significant [Agency Risk](/tags/Agency-Risk) in running software _at all_. Since computer systems follow rules we set for them, we shouldn't be surprised when those rules have exceptions that lead to disaster. For example:
- A process continually writing log files until the disks fill up, crashing the system.
- Bugs causing data to get corrupted, causing financial loss.
@@ -156,19 +156,19 @@ This problem may be a long way off. In any case it's not really in our interest
### Teams
-[Agency Risk](Agency-Risk.md) applies to _whole teams_ too. It's perfectly possible that a team within an organisation develops [Goals](../thinking/Glossary.md#goal) that don't align with those of the overall organisation. For example:
+[Agency Risk](/tags/Agency-Risk) applies to _whole teams_ too. It's perfectly possible that a team within an organisation develops [Goals](/thinking/Glossary.md#goal) that don't align with those of the overall organisation. For example:
- A team introduces excessive [Bureaucracy](Process-Risk.md#bureaucracy) in order to avoid work it doesn't like.
- A team gets obsessed with a particular technology, or their own internal process improvement, at the expense of delivering business value.
- A marginalised team forces their services on other teams in the name of "consistency". (This can happen a lot with "Architecture", "Branding" and "Testing" teams, sometimes for the better, sometimes for the worse.)
-When you work with an external consultancy, there is *always* more [Agency Risk](Agency-Risk.md) than with a direct employee. This is because as well as your goals and the employee's goals, there is also the consultancy's goals.
+When you work with an external consultancy, there is *always* more [Agency Risk](/tags/Agency-Risk) than with a direct employee. This is because, as well as your goals and the employee's goals, the consultancy's goals are in play too.
This is a good argument for avoiding consultancies, but sometimes the technical expertise they bring can outweigh this risk.
## Mitigating Agency Risk
-Let's look at three common ways to mitigate [Agency Risk](Agency-Risk.md): [Monitoring](#monitoring), [Security](#security) and [Goal Alignment](#goal-alignment). Let's start with Monitoring.
+Let's look at three common ways to mitigate [Agency Risk](/tags/Agency-Risk): [Monitoring](#monitoring), [Security](#security) and [Goal Alignment](#goal-alignment). Let's start with Monitoring.
### Monitoring
@@ -176,7 +176,7 @@ Let's look at three common ways to mitigate [Agency Risk](Agency-Risk.md): [Mon
At the core of the Principal-Agent Problem is the issue that we _want_ our agents to do work for us so we don't have the responsibility of doing it ourselves. However, we pick up the second-order responsibility of managing the agents instead.
-As a result (and as shown in the above diagram), we need to _Monitor_ the agents. The price of mitigating [Agency Risk](Agency-Risk.md) this way is that we have to spend time doing the monitoring ([Schedule Risk](Scarcity-Risk.md#schedule-risk)) and we have to understand what the agents are doing ([Complexity Risk](Complexity-Risk.md)).
+As a result (and as shown in the above diagram), we need to _Monitor_ the agents. The price of mitigating [Agency Risk](/tags/Agency-Risk) this way is that we have to spend time doing the monitoring ([Schedule Risk](/tags/Schedule-Risk)) and we have to understand what the agents are doing ([Complexity Risk](/tags/Complexity-Risk)).
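+In code, monitoring an agent often boils down to heartbeats and escalation. A hedged sketch (all names hypothetical): the watchdog doesn't _trust_ the worker, it _checks_ - buying less [Agency Risk](/tags/Agency-Risk) at the price of extra machinery.
+
+```java
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
+
+public class Watchdog {
+    private static final AtomicLong lastBeat = new AtomicLong(System.currentTimeMillis());
+
+    public static void main(String[] args) {
+        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
+        // The agent: does its work and reports in every second.
+        scheduler.scheduleAtFixedRate(
+                () -> lastBeat.set(System.currentTimeMillis()), 0, 1, TimeUnit.SECONDS);
+        // The principal's monitor: escalates if the agent goes quiet.
+        scheduler.scheduleAtFixedRate(() -> {
+            long silentMs = System.currentTimeMillis() - lastBeat.get();
+            if (silentMs > 3_000) {
+                System.err.println("agent silent for " + silentMs + "ms - escalate");
+            }
+        }, 1, 1, TimeUnit.SECONDS);
+    }
+}
+```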
Monitoring of _software process_ agents is an important part of designing reliable systems, and it makes perfect sense that this would apply to _human_ agents too. But for people, the _knowledge of being monitored_ can instil corrective behaviour. This is known as the Hawthorne Effect:
@@ -188,14 +188,14 @@ Security is all about _setting limits_ on agency - both within and outside a sys
![Related Risks](/img/generated/risks/agency/agency-risks.png)
-_Within_ a system we may wish to prevent our agents from causing accidental (or deliberate) harm but we also have [Agency Risk](Agency-Risk.md) from unwanted agents _outside_ the system. So security is also about ensuring that the environment we work in is _safe_ for the good actors to operate in while keeping out the bad actors.
+_Within_ a system we may wish to prevent our agents from causing accidental (or deliberate) harm but we also have [Agency Risk](/tags/Agency-Risk) from unwanted agents _outside_ the system. So security is also about ensuring that the environment we work in is _safe_ for the good actors to operate in while keeping out the bad actors.
Interestingly, security is handled in very similar ways in all kinds of systems, whether biological, human or institutional:
- **Walls**: defences _around_ the system, to protect its parts from the external environment.
- **Doors**: ways to get _in_ and _out_ of the system, possibly with _locks_.
- **Guards**: to make sure only the right things go in and out. (i.e. to try and keep out _bad actors_).
-- **Police**: to defend from _within_ the system against internal [Agency Risk](Agency-Risk.md).
+- **Police**: to defend from _within_ the system against internal [Agency Risk](/tags/Agency-Risk).
- **Subterfuge**: hiding, camouflage, disguises, pretending to be something else.
These work at various levels in **our own bodies**: our _cells_ have _cell walls_ around them, and _cell membranes_ that act as the guards to allow things in and out. Our _bodies_ have _skin_ to keep the world out, and we have _mouths_, _eyes_, _pores_ and so on to allow things in and out. We have an _immune system_ to act as the police.
@@ -206,15 +206,15 @@ We're waking up to the realisation that our software systems need to work the sa
![Security as a mitigation for Agency Risk](/img/generated/risks/agency/security-risk.png)
-[Agency Risk](Agency-Risk.md) and [Security Risk](Agency-Risk.md#security) thrive on complexity: the more complex the systems we create, the more opportunities there are for bad actors to insert themselves and extract their own value. The dilemma is, _increasing security_ also means increasing [Complexity Risk](Complexity-Risk.md), because secure systems are necessarily more complex than insecure ones.
+[Agency Risk](/tags/Agency-Risk) and [Security Risk](Agency-Risk.md#security) thrive on complexity: the more complex the systems we create, the more opportunities there are for bad actors to insert themselves and extract their own value. The dilemma is, _increasing security_ also means increasing [Complexity Risk](/tags/Complexity-Risk), because secure systems are necessarily more complex than insecure ones.
### Goal Alignment
-As we stated at the beginning, [Agency Risk](Agency-Risk.md) at any level comes down to differences of [Goals](../thinking/Glossary.md#goal) between the different agents, whether they are _people_, _teams_ or _software_.
+As we stated at the beginning, [Agency Risk](/tags/Agency-Risk) at any level comes down to differences of [Goals](/thinking/Glossary.md#goal) between the different agents, whether they are _people_, _teams_ or _software_.
#### Skin In The Game
-If you can _align the goals_ of the agents involved, you can mitigate [Agency Risk](Agency-Risk.md). Nassim Nicholas Taleb calls this "skin in the game": that is, the agent is exposed to the same risks as the principal.
+If you can _align the goals_ of the agents involved, you can mitigate [Agency Risk](/tags/Agency-Risk). Nassim Nicholas Taleb calls this "skin in the game": that is, the agent is exposed to the same risks as the principal.
> "Which brings us to the largest fragilizer of society, and greatest generator of crises, absence of 'skin in the game.' Some become antifragile at the expense of others by getting the upside (or gains) from volatility, variations, and disorder and exposing others to the downside risks of losses or harm." - [Nassim Nicholas Taleb, _Antifragile_](https://a.co/d/07LfBTI)
@@ -224,7 +224,7 @@ Another example of this is [The Code of Hammurabi](https://en.wikipedia.org/wiki
> "The death of a homeowner in a house collapse necessitates the death of the house's builder... if the homeowner's son died, the builder's son must die also." - [Code of Hammurabi, _Wikipedia_](https://en.wikipedia.org/wiki/Code_of_Hammurabi#Theories_of_purpose)
-Luckily, these kinds of exposure aren't very common on software projects! [Fixed Price Contracts](../thinking/One-Size-Fits-No-One.md#waterfall) and [Employee Stock Options](https://en.wikipedia.org/wiki/Employee_stock_option) are two exceptions.
+Luckily, these kinds of exposure aren't very common on software projects! [Fixed Price Contracts](/thinking/One-Size-Fits-No-One.md#waterfall) and [Employee Stock Options](https://en.wikipedia.org/wiki/Employee_stock_option) are two exceptions.
#### Needs Theory
@@ -232,25 +232,25 @@ David McClelland's Needs Theory suggests that there are two types of skin-in-the
> "Need theory... proposed by psychologist David McClelland, is a motivational model that attempts to explain how the needs for achievement, power, and affiliation affect the actions of people from a managerial context... McClelland stated that we all have these three types of motivation regardless of age, sex, race, or culture. The type of motivation by which each individual is driven derives from their life experiences and the opinions of their culture. " - [Need Theory, _Wikipedia_](https://en.wikipedia.org/wiki/Need_theory)
-So one mitigation for [Agency Risk](Agency-Risk.md) is therefore to employ these extrinsic factors. For example, by making individuals responsible and rewarded for the success or failure of projects, we can align their personal motivations with those of the project.
+One mitigation for [Agency Risk](/tags/Agency-Risk) is therefore to employ these extrinsic factors. For example, by making individuals responsible for and rewarded by the success or failure of projects, we can align their personal motivations with those of the project.
> "One key to success in a mission is establishing clear lines of blame." - [Henshaw's Law, _Akin's Laws Of Spacecraft Design_](https://spacecraft.ssl.umd.edu/akins_laws.html)
-But _extrinsic motivation_ is a complex, difficult-to-apply tool. In [Map And Territory Risk](Map-And-Territory-Risk.md) we will come back to this and look at the various ways in which it can go awry.
+But _extrinsic motivation_ is a complex, difficult-to-apply tool. In [Map And Territory Risk](/tags/Map-And-Territory-Risk) we will come back to this and look at the various ways in which it can go awry.
![Collective Code Ownership, Individual Responsibility](/img/generated/risks/agency/cco.png)
-Tools like [Pair Programming](https://en.wikipedia.org/wiki/Pair_programming) and [Collective Code Ownership](https://en.wikipedia.org/wiki/Collective_ownership) are about mitigating [Staff Risks](Scarcity-Risk.md#staff-risk) like [Key Person Risk](https://en.wikipedia.org/wiki/Key_person_insurance#Key_person_definition) and [Learning Curve Risk](Communication-Risk.md#learning-curve-risk), but these push in the opposite direction to _individual responsibility_.
+Tools like [Pair Programming](https://en.wikipedia.org/wiki/Pair_programming) and [Collective Code Ownership](https://en.wikipedia.org/wiki/Collective_ownership) are about mitigating [Staff Risks](Scarcity-Risk.md#staff-risk) like [Key Person Risk](https://en.wikipedia.org/wiki/Key_person_insurance#Key_person_definition) and [Learning Curve Risk](/tags/Learning-Curve-Risk), but these push in the opposite direction to _individual responsibility_.
-This is an important consideration: in adopting _those_ tools, you are necessarily setting aside certain _other_ tools to manage [Agency Risk](Agency-Risk.md) as a result.
+This is an important consideration: in adopting _those_ tools, you are necessarily setting aside certain _other_ tools to manage [Agency Risk](/tags/Agency-Risk) as a result.
## Wrapping Up
-We've looked at various different shades of [Agency Risk](Agency-Risk.md) and three different mitigations for it. [Agency Risk](Agency-Risk.md) is a concern at the level of _individual agents_, whether they are processes, people, systems or teams.
+We've looked at various different shades of [Agency Risk](/tags/Agency-Risk) and three different mitigations for it. [Agency Risk](/tags/Agency-Risk) is a concern at the level of _individual agents_, whether they are processes, people, systems or teams.
-So having looked at agents _individually_, it's time to look more closely at [Goals](../thinking/Glossary.md#goal), and the [Attendant Risks](../thinking/Glossary.md#attendant-risk) when aligning them amongst multiple agents.
+So having looked at agents _individually_, it's time to look more closely at [Goals](/thinking/Glossary.md#goal), and the [Attendant Risks](/thinking/Glossary.md#attendant-risk) when aligning them amongst multiple agents.
-On to [Coordination Risk](Coordination-Risk.md)...
+On to [Coordination Risk](/tags/Coordination-Risk)...
\ No newline at end of file
diff --git a/docs/risks/Dependency-Risks/Agency-Risks/Security-Risk.md b/docs/risks/Dependency-Risks/Agency-Risks/Security-Risk.md
new file mode 100644
index 000000000..e69de29bb
diff --git a/docs/risks/Dependency-Risks/Boundary-Risk.md b/docs/risks/Dependency-Risks/Boundary-Risk.md
index 49f7585d9..477809398 100644
--- a/docs/risks/Dependency-Risks/Boundary-Risk.md
+++ b/docs/risks/Dependency-Risks/Boundary-Risk.md
@@ -16,34 +16,34 @@ part_of: Dependency Risk
-In the previous sections on [Dependency Risk](Dependency-Risk.md) we've touched on [Boundary Risk](Boundary-Risk.md) several times, but now it's time to tackle it head-on and discuss this important type of risk.
+In the previous sections on [Dependency Risk](/tags/Dependency-Risk) we've touched on [Boundary Risk](/tags/Boundary-Risk) several times, but now it's time to tackle it head-on and discuss this important type of risk.
![Boundary Risk is due to Dependency Risk and commitment](/img/generated/risks/boundary/boundary-risk.png)
-As shown in the above diagram, [Boundary Risk](Boundary-Risk.md) is the risk we face due to _commitments_ around dependencies and the limitations they place on our ability to change. To illustrate, lets consider two examples:
+As shown in the above diagram, [Boundary Risk](/tags/Boundary-Risk) is the risk we face due to _commitments_ around dependencies and the limitations they place on our ability to change. To illustrate, let's consider two examples:
-- Although I eat cereals for breakfast, I don't have [Boundary Risk](Boundary-Risk.md) on them. If the supermarket runs out of cereals when I go, I can just buy some other food and eat that.
-- However the hot water system in my house uses gas. If that's not available I can't just switch to using oil or solar without cost. There is [Boundary Risk](Boundary-Risk.md), but it's low because the supply of gas is plentiful and seems like it will stay that way.
+- Although I eat cereals for breakfast, I don't have [Boundary Risk](/tags/Boundary-Risk) on them. If the supermarket runs out of cereals when I go, I can just buy some other food and eat that.
+- However, the hot water system in my house uses gas. If that's not available, I can't just switch to using oil or solar without cost. There is [Boundary Risk](/tags/Boundary-Risk), but it's low because the supply of gas is plentiful and seems like it will stay that way.
-In terms of the [Risk Landscape](Risk-Landscape.md), [Boundary Risk](Boundary-Risk.md) is exactly as it says: a _boundary_, _wall_ or other kind of obstacle in your way to making a move you want to make. This changes the nature of the [Risk Landscape](../thinking/Glossary.md#risk-landscape), and introduces a maze-like component to it. It also means that we have to make _commitments_ about which way to go, knowing that our future paths are constrained by the decisions we make.
+In terms of the [Risk Landscape](Risk-Landscape.md), [Boundary Risk](/tags/Boundary-Risk) is exactly as it says: a _boundary_, _wall_ or other kind of obstacle in your way to making a move you want to make. This changes the nature of the [Risk Landscape](/thinking/Glossary.md#risk-landscape), and introduces a maze-like component to it. It also means that we have to make _commitments_ about which way to go, knowing that our future paths are constrained by the decisions we make.
-As we discussed in [Complexity Risk](Complexity-Risk.md), there is always the chance we end up at a [Dead End](Complexity-Risk.md#dead-end-risk), having done work that we need to throw away. In this case, we'll have to head back and make a different decision.
+As we discussed in [Complexity Risk](/tags/Complexity-Risk), there is always the chance we end up at a [Dead End](/tags/Dead-End-Risk), having done work that we need to throw away. In this case, we'll have to head back and make a different decision.
## In Software Development
-In software development, although we might face [Boundary Risk](Boundary-Risk.md) choosing staff or offices, most of the everyday dependency commitments we have to make are around _abstractions_.
+In software development, although we might face [Boundary Risk](/tags/Boundary-Risk) choosing staff or offices, most of the everyday dependency commitments we have to make are around _abstractions_.
-As discussed in [Software Dependency Risk](Software-Dependency-Risk.md), if we are going to use a software tool as a dependency, we have to accept the complexity of its [protocols](Communication-Risk.md#protocols). You have to use its protocol: it won't come to you.
+As discussed in [Software Dependency Risk](/tags/Software-Dependency-Risk), if we are going to use a software tool as a dependency, we have to accept the complexity of its [protocols](Communication-Risk.md#protocols). You have to use its protocol: it won't come to you.
![Our System receives data from the `input`, translates it and sends it to the `output`. But which dependency should we use for the translation, if any?](/img/generated/risks/boundary/choices.png)
Let's take a look at a hypothetical system structure in the diagram above. In this design, we are transforming data from the `input` to the `output`. But how should we do it?
- - We could transform via library 'a', using the [Protocols](Communication-Risk.md#protocol-risk) of 'a', and having a dependency on 'a'.
- - We could use library 'b', using the [Protocols](Communication-Risk.md#protocol-risk) of 'b', and having a dependency on 'b'.
- - We could use neither, and avoid the dependency, but potentially pick up lots more [Codebase Risk](Complexity-Risk.md#codebase-risk) and [Schedule Risk](Scarcity-Risk.md#schedule-risk) because we have to code our own alternative to 'a' and 'b'.
+ - We could transform via library 'a', using the [Protocols](/tags/Protocol-Risk) of 'a', and having a dependency on 'a'.
+ - We could use library 'b', using the [Protocols](/tags/Protocol-Risk) of 'b', and having a dependency on 'b'.
+ - We could use neither, and avoid the dependency, but potentially pick up lots more [Codebase Risk](/tags/Codebase-Risk) and [Schedule Risk](/tags/Schedule-Risk) because we have to code our own alternative to 'a' and 'b'.
-The choice of approach presents us with [Boundary Risk](Boundary-Risk.md) because we don't know that we'll necessarily be successful with any of these options until we _go down the path_ of committing to one:
+The choice of approach presents us with [Boundary Risk](/tags/Boundary-Risk) because we don't know that we'll necessarily be successful with any of these options until we _go down the path_ of committing to one:
- Maybe 'a' has some undocumented drawbacks that are going to hold us up.
- Maybe 'b' works on some streaming API basis that is incompatible with the input protocol.
@@ -51,16 +51,16 @@ The choice of approach presents us with [Boundary Risk](Boundary-Risk.md) becaus
... and so on.
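
One way to hedge this bet is to make the commitment as small as possible: hide the chosen library behind an abstraction _you_ own, so that changing your mind later means rewriting one adapter rather than the whole codebase. Below is a minimal Java sketch of the idea; the `Translator` interface and the adapter class are hypothetical stand-ins for libraries 'a' and 'b'.

```java
// A minimal sketch: the codebase depends only on its own Translator
// abstraction, so the commitment to library 'a' (or later, 'b') is
// confined to a single adapter class.
interface Translator {
    String translate(String input);
}

// Hypothetical adapter committing to library 'a' - the only place that
// would import it. The call into 'a' is stubbed for illustration.
class LibraryATranslator implements Translator {
    @Override
    public String translate(String input) {
        return input.toUpperCase();
    }
}

// The rest of the system never sees which library was chosen.
class Pipeline {
    private final Translator translator;

    Pipeline(Translator translator) {
        this.translator = translator;
    }

    String run(String input) {
        return translator.translate(input);
    }
}
```

This doesn't remove the [Boundary Risk](/tags/Boundary-Risk) - the adapter still has to be written and maintained - but it stops the commitment leaking into every corner of the codebase.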
-This is a toy example, but in real life this dilemma occurs when we choose between database vendors, cloud hosting platforms, operating systems, software libraries etc. and it was a big factor in our analysis of [Software Dependency Risk](Software-Dependency-Risk.md).
+This is a toy example, but in real life this dilemma occurs whenever we choose between database vendors, cloud hosting platforms, operating systems, software libraries and so on. It was a big factor in our analysis of [Software Dependency Risk](/tags/Software-Dependency-Risk).
## Factors In Boundary Risk
-The degree of [Boundary Risk](Boundary-Risk.md) is dependent on a number of factors:
+The degree of [Boundary Risk](/tags/Boundary-Risk) is dependent on a number of factors:
- - **The Sunk Cost** of the [Learning Curve](Communication-Risk.md#learning-curve-risk) we've overcome to integrate the dependency, which may fail to live up to expectations (_cf._ [Feature Fit Risks](Feature-Risk.md#feature-fit-risk)). We can avoid accreting this by choosing the _simplest_ and _fewest_ dependencies for any task, and trying to [Meet Reality](../thinking/Meeting-Reality.md) quickly.
- - **Life Expectancy**: libraries and products come and go. A choice that was popular when it was made may be superseded in the future by something better. (_cf._ [Market Risk](Feature-Risk.md#market-risk)). This may not be a problem until you try to renew a support contract, or try to do an operating system update. Although no-one can predict how long a technology will last, [The Lindy Effect](https://en.wikipedia.org/wiki/Lindy_effect) suggests that _future life expectancy is proportional to current age_. So, you might expect a technology that has been around for ten years to be around for a further ten.
+ - **The Sunk Cost** of the [Learning Curve](/tags/Learning-Curve-Risk) we've overcome to integrate the dependency, which may fail to live up to expectations (_cf._ [Feature Fit Risks](/tags/Feature-Fit-Risk)). We can avoid accreting this by choosing the _simplest_ and _fewest_ dependencies for any task, and trying to [Meet Reality](/thinking/Meeting-Reality.md) quickly.
+ - **Life Expectancy**: libraries and products come and go. A choice that was popular when it was made may be superseded in the future by something better. (_cf._ [Market Risk](/tags/Market-Risk)). This may not be a problem until you try to renew a support contract, or try to do an operating system update. Although no-one can predict how long a technology will last, [The Lindy Effect](https://en.wikipedia.org/wiki/Lindy_effect) suggests that _future life expectancy is proportional to current age_. So, you might expect a technology that has been around for ten years to be around for a further ten.
- **The level of [Lock In](#ecosystems-and-lock-in)**, where the cost of switching to a new dependency in the future is some function of the level of commitment to the current dependency. As an example, consider the level of commitment you have to your mother tongue. If you have spent your entire life committed to learning and communicating in English, there is a massive level of lock-in. Overcoming this to become fluent in Chinese may be an overwhelming task.
- - **Future needs**: will the dependency satisfy your expanding requirements going forward? (_cf._ [Feature Drift Risk](Feature-Risk.md#feature-drift-risk))
+ - **Future needs**: will the dependency satisfy your expanding requirements going forward? (_cf._ [Feature Drift Risk](/tags/Feature-Drift-Risk))
- **Ownership changes:** Microsoft buys [GitHub](https://en.wikipedia.org/wiki/GitHub). What will happen to the ecosystem around GitHub now?
- **Licensing changes:** (e.g. [Oracle](https://oracle.com) buys Tangosol, who make [Coherence](https://en.wikipedia.org/wiki/Oracle_Coherence)). Having done this, they increased the licensing costs of Coherence to huge levels, milking the [Cash Cow](https://en.wikipedia.org/wiki/Cash_cow) of the installed user-base, but ensuring no-one else is likely to commit to it in the future.
@@ -79,11 +79,11 @@ But crucially, the underlying abstractions of WordPress and Drupal are different
> "... a set of businesses functioning as a unit and interacting with a shared market for software and services, together with relationships among them. These relationships are frequently underpinned by a common technological platform and operate through the exchange of information, resources, and artifacts." - [Software Ecosystem, _Wikipedia_](https://en.wikipedia.org/wiki/Software_ecosystem)
-You can think of the ecosystem as being like the footprint of a town or a city, consisting of the buildings, transport network and the people that live there. Within the city, and because of the transport network and the amenities available, it's easy to make rapid, useful moves on the [Risk Landscape](Risk-Landscape.md). In a software ecosystem it's the same: the ecosystem has gathered together to provide a way to mitigate various different [Feature Risks](Feature-Risk.md) in a common way.
+You can think of the ecosystem as being like the footprint of a town or a city, consisting of the buildings, transport network and the people that live there. Within the city, and because of the transport network and the amenities available, it's easy to make rapid, useful moves on the [Risk Landscape](Risk-Landscape.md). In a software ecosystem it's the same: the ecosystem has gathered together to provide a way to mitigate various different [Feature Risks](/tags/Feature-Risk) in a common way.
-Ecosystem size is one key determinant of [Boundary Risk](Boundary-Risk.md):
+Ecosystem size is one key determinant of [Boundary Risk](/tags/Boundary-Risk):
-- **A large ecosystem** has a large boundary circumference. [Boundary Risk](Boundary-Risk.md) is lower in a large ecosystem because your moves on the [Risk Landscape](../thinking/Glossary.md#risk-landscape) are unlikely to collide with it. The boundary _got large_ because other developers before you hit the boundary and did the work building the software equivalents of bridges and roads and pushing it back so that the boundary didn't get in their way.
+- **A large ecosystem** has a large boundary circumference. [Boundary Risk](/tags/Boundary-Risk) is lower in a large ecosystem because your moves on the [Risk Landscape](/thinking/Glossary.md#risk-landscape) are unlikely to collide with it. The boundary _got large_ because other developers before you hit the boundary and did the work building the software equivalents of bridges and roads and pushing it back so that the boundary didn't get in their way.
- In **a small ecosystem**, you are much more likely to come into contact with the edges of the boundary. _You_ will have to be the developer that pushes back the frontier and builds the roads for the others. This is hard work.
### Big Ecosystems Get Bigger
@@ -117,7 +117,7 @@ When a tool or platform is popular, it is under pressure to increase in complexi
> "The Peter principle is a concept in management developed by Laurence J. Peter, which observes that people in a hierarchy tend to rise to their 'level of incompetence'." - [The Peter Principle, _Wikipedia_](https://en.wikipedia.org/wiki/Peter_principle)
-Although designed for _people_, it can just as easily be applied to any other dependency you can think of. This means when things get popular, there is a tendency towards [Conceptual Integrity Risk](Feature-Risk.md#conceptual-integrity-risk) and [Complexity Risk](Complexity-Risk.md).
+Although designed for _people_, it can just as easily be applied to any other dependency you can think of. This means when things get popular, there is a tendency towards [Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk) and [Complexity Risk](/tags/Complexity-Risk).
![Java Public Classes By Version (3-9)](/img/numbers/java_classes_by_version.png)
@@ -125,19 +125,19 @@ The above chart is an example of this: look at how the number of public classes
#### Backward Compatibility
-As we saw in [Software Dependency Risk](Software-Dependency-Risk.md), The art of good design is to afford the greatest increase in functionality with the smallest increase in complexity possible, and this usually means [Refactoring](https://en.wikipedia.org/wiki/Refactoring). But, this is at odds with [Backward Compatibility](Communication-Risk.md#backward-compatibility).
+As we saw in [Software Dependency Risk](/tags/Software-Dependency-Risk), the art of good design is to afford the greatest increase in functionality with the smallest increase in complexity possible, and this usually means [Refactoring](https://en.wikipedia.org/wiki/Refactoring). But, this is at odds with [Backward Compatibility](Communication-Risk.md#backward-compatibility).
-Each new version has a greater functional scope than the one before (pushing back [Boundary Risk](Boundary-Risk.md)), making the platform more attractive to build solutions in. But this increases the [Complexity Risk](Complexity-Risk.md) as there is more functionality to deal with.
+Each new version has a greater functional scope than the one before (pushing back [Boundary Risk](/tags/Boundary-Risk)), making the platform more attractive to build solutions in. But this increases the [Complexity Risk](/tags/Complexity-Risk) as there is more functionality to deal with.
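
To make that tension concrete, here is a small, hypothetical Java sketch of what backward compatibility does to an API: the old method can never be removed without breaking existing callers, so each release carries both versions and the surface area grows.

```java
import java.util.Locale;

class TranslatorApi {

    /**
     * @deprecated kept only so existing callers don't break;
     * use {@link #translate(String, Locale)} instead.
     */
    @Deprecated
    String translate(String input) {
        return translate(input, Locale.getDefault());
    }

    // The newer, more capable version. The old one above can't be
    // refactored away, so complexity accretes with every release.
    String translate(String input, Locale locale) {
        return input; // stubbed for the sketch
    }
}
```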
![Tradeoff between large and small ecosystems](/img/generated/risks/boundary/boundary-risk2.png)
-You can see in the diagram above the Peter Principle at play: as more responsibility is given to a dependency, the more complex it gets and the greater the learning curve to work with it. Large ecosystems like Java react to [Learning Curve Risk](Communication-Risk.md#learning-curve-risk) by having copious amounts of literature to read or buy to help, but it is still off-putting.
+You can see in the diagram above the Peter Principle at play: as more responsibility is given to a dependency, the more complex it gets and the greater the learning curve to work with it. Large ecosystems like Java react to [Learning Curve Risk](/tags/Learning-Curve-Risk) by having copious amounts of literature to read or buy to help, but it is still off-putting.
-Because [Complexity is Mass](Complexity-Risk.md#complexity-is-mass), large ecosystems can't respond quickly to [Feature Drift](Feature-Risk.md#feature-drift-risk). This means that when the world changes, new ecosystems are likely to appear to fill gaps, rather than old ones moving in.
+Because [Complexity is Mass](Complexity-Risk.md#complexity-is-mass), large ecosystems can't respond quickly to [Feature Drift](/tags/Feature-Drift-Risk). This means that when the world changes, new ecosystems are likely to appear to fill gaps, rather than old ones moving in.
## Managing Boundary Risk
-Let's look at two ways in which we can manage [Boundary Risk](Boundary-Risk.md): _bridges_ and _standards_.
+Let's look at two ways in which we can manage [Boundary Risk](/tags/Boundary-Risk): _bridges_ and _standards_.
### Ecosystem Bridges
@@ -147,7 +147,7 @@ Sometimes, technology comes along that allows us to cross boundaries, like a _br
I find that a lot of the code I write is of this nature: trying to write the _glue code_ to join together two different _ecosystems_.
-As shown in the above diagram, mitigating [Boundary Risk](Boundary-Risk.md) involves taking on complexity. The more [Protocol Complexity](Communication-Risk.md#protocol-risk) there is on either side of the two ecosystems, the more [complex](Complexity-Risk.md) the bridge will necessarily be. The below table shows some examples of this.
+As shown in the above diagram, mitigating [Boundary Risk](/tags/Boundary-Risk) involves taking on complexity. The more [Protocol Complexity](/tags/Protocol-Risk) there is on either side of the two ecosystems, the more [complex](/tags/Complexity-Risk) the bridge will necessarily be. The table below shows some examples of this.
|Protocol Risk From A |Protocol Risk From B |Resulting Bridge Complexity |Example |
|-----------------------------|----------------------------|-----------------------------|---------------------------------------------------------|
@@ -157,14 +157,14 @@ As shown in the above diagram, mitigating [Boundary Risk](Boundary-Risk.md) invo
-From examining the [Protocol Risk](Communication-Risk.md#protocol-risk) at each end of the bridge you are creating, you can get a rough idea of how complex the endeavour will be:
+From examining the [Protocol Risk](/tags/Protocol-Risk) at each end of the bridge you are creating, you can get a rough idea of how complex the endeavour will be:
- If it's low-risk at both ends, you're probably going to be able to knock it out easily. Like translating a date, or converting one file format to another.
- Where one of the protocols is _evolving_, you're definitely going to need to keep releasing new versions. The functionality of a `Calculator` app on my phone remains the same, but new versions have to be released as the phone APIs change, screens change resolution and so on.
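
As a sketch of the low-risk end of that spectrum, here is the date-translation bridge in Java: both protocols (the two date formats) are fixed and fully specified, so the whole bridge is a few lines.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// A low-complexity bridge: translating dates between two fixed,
// fully-specified formats.
class DateBridge {
    private static final DateTimeFormatter UK_FORMAT =
            DateTimeFormatter.ofPattern("dd/MM/yyyy");

    static String toIso(String ukDate) {
        // e.g. "25/12/2024" becomes "2024-12-25"
        return LocalDate.parse(ukDate, UK_FORMAT)
                        .format(DateTimeFormatter.ISO_LOCAL_DATE);
    }
}
```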
### Standards
-Standards mitigate [Boundary Risk](Boundary-Risk.md) in one of two ways:
+Standards mitigate [Boundary Risk](/tags/Boundary-Risk) in one of two ways:
1. **Abstract over the ecosystems.** Provide a _standard_ protocol (a _lingua franca_) which can be converted down into the protocol of any of a number of competing ecosystems.
@@ -174,17 +174,17 @@ Standards mitigate [Boundary Risk](Boundary-Risk.md) in one of two ways:
2. **Force adoption.** All of the ecosystems start using the standard for fear of being left out in the cold. Sometimes, a standards body is involved, but other times a "de facto" standard emerges that everyone adopts.
- - [ASCII](https://en.wikipedia.org/wiki/ASCII): fixed the different-character-sets [Boundary Risk](Boundary-Risk.md) by being a standard that others could adopt. Before everyone agreed on ASCII, copying data from one computer system to another was a massive pain, and would involve some kind of translation. [Unicode](https://en.wikipedia.org/wiki/Unicode) continues this work.
+ - [ASCII](https://en.wikipedia.org/wiki/ASCII): fixed the different-character-sets [Boundary Risk](/tags/Boundary-Risk) by being a standard that others could adopt. Before everyone agreed on ASCII, copying data from one computer system to another was a massive pain, and would involve some kind of translation. [Unicode](https://en.wikipedia.org/wiki/Unicode) continues this work.
- - [Internet Protocol](https://en.wikipedia.org/wiki/Internet_Protocol). As we saw in [Communication Risk](Communication-Risk.md#protocol-risk), the Internet Protocol (IP) is the _lingua franca_ of the modern Internet. However, at one period of time, there were many competing standards. IP was the ecosystem that "won", and was subsequently standardised by the [IETF](https://en.wikipedia.org/wiki/Internet_Engineering_Task_Force). This is actually an example of _both_ approaches: as we saw in [Communication Risk](Communication-Risk.md), Internet Protocol is also an abstraction over lower-level protocols.
+ - [Internet Protocol](https://en.wikipedia.org/wiki/Internet_Protocol). As we saw in [Communication Risk](/tags/Protocol-Risk), the Internet Protocol (IP) is the _lingua franca_ of the modern Internet. However, at one period of time, there were many competing standards. IP was the ecosystem that "won", and was subsequently standardised by the [IETF](https://en.wikipedia.org/wiki/Internet_Engineering_Task_Force). This is actually an example of _both_ approaches: as we saw in [Communication Risk](/tags/Communication-Risk), Internet Protocol is also an abstraction over lower-level protocols.
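
Here is a tiny Java illustration of what an adopted standard buys you: because sender and receiver both commit to UTF-8 (the successor to ASCII), moving text between systems is a one-line conversion rather than a bespoke translation exercise.

```java
import java.nio.charset.StandardCharsets;

class Wire {
    // Sender and receiver both commit to the UTF-8 standard...
    static byte[] send(String text) {
        return text.getBytes(StandardCharsets.UTF_8);
    }

    // ...so no per-system character translation is needed.
    static String receive(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```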
## Boundary Risk Cycle
![Boundary Risk Decreases With Bridges and Standards](/img/generated/risks/boundary/cycle.png)
-[Boundary Risk](Boundary-Risk.md) seems to progress in cycles. As a piece of technology becomes more mature, there are more standards and bridges, and [Boundary Risk](Boundary-Risk.md) is lower. Once [Boundary Risk](Boundary-Risk.md) is low and a particular approach is proven, there will be innovation upon this, giving rise to new opportunities for [Boundary Risk](Boundary-Risk.md) (see the diagram above). Here are some examples:
+[Boundary Risk](/tags/Boundary-Risk) seems to progress in cycles. As a piece of technology becomes more mature, there are more standards and bridges, and [Boundary Risk](/tags/Boundary-Risk) is lower. Once [Boundary Risk](/tags/Boundary-Risk) is low and a particular approach is proven, there will be innovation upon this, giving rise to new opportunities for [Boundary Risk](/tags/Boundary-Risk) (see the diagram above). Here are some examples:
- - **Processor Chips.** By providing features (instructions) on their processors that other vendors didn't have, manufacturers made their processors more attractive to system integrators. However, since the instructions were different on different chips, this created [Boundary Risk](Boundary-Risk.md) for the integrators. Intel and Microsoft were able to use this fact to build a big ecosystem around Windows running on Intel chips (so called, WinTel). The Intel instruction set is nowadays a _de-facto_ standard for PCs.
+ - **Processor Chips.** By providing features (instructions) on their processors that other vendors didn't have, manufacturers made their processors more attractive to system integrators. However, since the instructions were different on different chips, this created [Boundary Risk](/tags/Boundary-Risk) for the integrators. Intel and Microsoft were able to use this fact to build a big ecosystem around Windows running on Intel chips (so-called "WinTel"). The Intel instruction set is nowadays a _de-facto_ standard for PCs.
- **Browsers.** In the late 1990s, faced with the emergence of the nascent World Wide Web, and the [Netscape Navigator](https://en.wikipedia.org/wiki/Netscape_Navigator) browser, [Microsoft](https://en.wikipedia.org/wiki/Microsoft) adopted a strategy known as [Embrace and Extend](https://en.wikipedia.org/wiki/Embrace_and_extend). The idea was to subvert the HTML standard to their own ends by _embracing_ the standard, creating their own browser (Internet Explorer) and then _extending_ it with as much functionality as possible, which would then _not work_ in Netscape Navigator. They then embarked on a campaign to get everyone to "upgrade" to Internet Explorer. In this way, they hoped to "own" the Internet, or at least the software of the browser, which they saw as analogous to being the "operating system" of the Internet, and therefore a threat to their own operating system, [Windows](https://en.wikipedia.org/wiki/Microsoft_Windows).
@@ -195,31 +195,31 @@ Standards mitigate [Boundary Risk](Boundary-Risk.md) in one of two ways:
## Everyday Boundary Risks
-Although ecosystems are one very pernicious type of boundary in software development, it's worth pointing out that [Boundary Risk](Boundary-Risk.md) occurs all the time. Let's look at some ways:
+Although ecosystems are one very pernicious type of boundary in software development, it's worth pointing out that [Boundary Risk](/tags/Boundary-Risk) occurs all the time. Let's look at some ways:
- **Configuration**. When software has to be deployed onto a server, there has to be configuration (usually on the command line, or via configuration property files) in order to bridge the boundary between the _environment it's running in_ and the _software being run_. Often this is setting up file locations, security keys and passwords, and telling it where to find other files and services.
-- **Integration Testing**. Building a unit test is easy. You are generally testing some code you have written, aided with a testing framework. Your code and the framework are both written in the same language, which means low [Boundary Risk](Boundary-Risk.md). But to _integration test_ you need to step outside this boundary and so it becomes much harder. This is true whether you are integrating with other systems (providing or supplying them with data) or parts of your own system (say testing the client-side and server parts together).
+- **Integration Testing**. Building a unit test is easy. You are generally testing some code you have written, aided with a testing framework. Your code and the framework are both written in the same language, which means low [Boundary Risk](/tags/Boundary-Risk). But to _integration test_ you need to step outside this boundary and so it becomes much harder. This is true whether you are integrating with other systems (providing or supplying them with data) or parts of your own system (say testing the client-side and server parts together).
-- **User Interface Testing**. The interface with the user is a complex, under-specified risky [protocol](Communication-Risk.md#protocol-risk). Although tools exist to automate UI testing (such as [Selenium](https://en.wikipedia.org/wiki/Selenium_(software)), these rarely satisfactorily mitigate this [protocol risk](Communication-Risk.md#protocol-risk): can you be sure that the screen hasn't got strange glitches, that the mouse moves correctly, that the proportions on the screen are correct on all browsers?
+- **User Interface Testing**. The interface with the user is a complex, under-specified, risky [protocol](/tags/Protocol-Risk). Although tools exist to automate UI testing (such as [Selenium](https://en.wikipedia.org/wiki/Selenium_(software))), these rarely satisfactorily mitigate this [protocol risk](/tags/Protocol-Risk): can you be sure that the screen hasn't got strange glitches, that the mouse moves correctly, that the proportions on the screen are correct on all browsers?
- **Jobs**. When you pick a new technology to learn and add to your CV, it's worth keeping in mind how useful this will be to you in the future. It's career-limiting to be stuck in a dying ecosystem with the need to retrain.
-- **Teams**. if you're asked to build a new tool for an existing team, are you creating [Boundary Risk](Boundary-Risk.md) by using tools that the team aren't familiar with?
+- **Teams**. If you're asked to build a new tool for an existing team, are you creating [Boundary Risk](/tags/Boundary-Risk) by using tools that the team aren't familiar with?
-- **Organisations**. Getting teams or departments to work with each other often involves breaking down [Boundary Risk](Boundary-Risk.md). Often the departments use different tool-sets or processes, and have different goals making the translation harder.
+- **Organisations**. Getting teams or departments to work with each other often involves breaking down [Boundary Risk](/tags/Boundary-Risk). Often the departments use different tool-sets or processes, and have different goals, making the translation harder.
## Patterns In Boundary Risk
-In [Feature Risk](Feature-Risk.md#feature-drift-risk), we saw that the features people need change over time. Let's get more specific about this:
+In [Feature Risk](/tags/Feature-Drift-Risk), we saw that the features people need change over time. Let's get more specific about this:
- **Human need is [Fractal](https://en.wikipedia.org/wiki/Fractal)**: this means that over time, software products have evolved to more closely map human needs. Software that would have delighted us ten years ago lacks the sophistication we expect today.
- **Software and hardware are both improving with time**: due to evolution and the ability to support greater and greater levels of complexity.
-- **Abstractions accrete too**: as we saw in [Process Risk](Process-Risk.md), we _encapsulate_ earlier abstractions in order to build later ones.
+- **Abstractions accrete too**: as we saw in [Process Risk](/tags/Process-Risk), we _encapsulate_ earlier abstractions in order to build later ones.
The only thing we can expect in the future is that the lifespan of any ecosystem will follow the arc shown in the above diagram: creation, adoption, growth and use, until finally it is either abstracted over or abandoned.
Although our discipline is a young one, we should probably expect to see "Software Archaeology" in the same way as we see it for biological organisms. Already we can see the dead-ends in the software evolutionary tree: the COBOL and BASIC languages, CASE systems. Languages like FORTH live on in PostScript, and SQL is still embedded in everything.
-Let's move on now to the last [Dependency Risk](Dependency-Risk.md) section, and look at [Agency Risk](Agency-Risk.md).
+Let's move on now to the last [Dependency Risk](/tags/Dependency-Risk) section, and look at [Agency Risk](/tags/Agency-Risk).
diff --git a/docs/risks/Dependency-Risks/Deadline-Risk.md b/docs/risks/Dependency-Risks/Deadline-Risk.md
index 246cbd99a..663ec3901 100644
--- a/docs/risks/Dependency-Risks/Deadline-Risk.md
+++ b/docs/risks/Dependency-Risks/Deadline-Risk.md
@@ -27,33 +27,33 @@ In the first example, you can't _start_ something until a particular event happe
## Events Mitigate Risk...
-Having an event occur in a fixed time and place is [mitigating risk](../thinking/Glossary.md#mitigated-risk):
+Having an event occur at a fixed time and place is [mitigating risk](/thinking/Glossary.md#mitigated-risk):
-- By taking the bus, we are mitigating our own [Schedule Risk](Scarcity-Risk.md#schedule-risk): we're (hopefully) reducing the amount of time we're going to spend on the activity of getting to work. It's not entirely necessary to even take the bus: you could walk, or go by another form of transport. But, effectively, this just swaps one dependency for another: if you walk, this might well take longer and use more energy, so you're just picking up [Schedule Risk](Scarcity-Risk.md#schedule-risk) in another way.
-- Events are a mitigation for [Coordination Risk](Coordination-Risk.md): a bus needn't necessarily _have_ a fixed timetable. It could wait for each passenger until they turned up, and then go. (A bit like ride-sharing works). This would be a total disaster from a [Coordination Risk](Coordination-Risk.md) perspective, as one person could cause everyone else to be really really late.
-- If you drive, you have a dependency on your car instead. So, there is often an _opportunity cost_ with dependencies. Using the bus might be a cheaper way to travel, so you're picking up less [Fuding Risk](Scarcity-Risk.md#funding-risk) by using it.
+- By taking the bus, we are mitigating our own [Schedule Risk](/tags/Schedule-Risk): we're (hopefully) reducing the amount of time we're going to spend on the activity of getting to work. It's not entirely necessary to even take the bus: you could walk, or go by another form of transport. But, effectively, this just swaps one dependency for another: if you walk, this might well take longer and use more energy, so you're just picking up [Schedule Risk](/tags/Schedule-Risk) in another way.
+- Events are a mitigation for [Coordination Risk](/tags/Coordination-Risk): a bus needn't necessarily _have_ a fixed timetable. It could wait for each passenger until they turned up, and then go (a bit like how ride-sharing works). This would be a total disaster from a [Coordination Risk](/tags/Coordination-Risk) perspective, as one person could cause everyone else to be really, really late.
+- If you drive, you have a dependency on your car instead. So, there is often an _opportunity cost_ with dependencies. Using the bus might be a cheaper way to travel, so you're picking up less [Funding Risk](/tags/Funding-Risk) by using it.
## But, Events Lead To Attendant Risk
![Action Diagram showing risks mitigated by having an _event_](/img/generated/risks/deadline/dependency-risk-event.png)
-By _deciding to use the bus_ we've [Taken Action](../thinking/Glossary.md#taking-action). By agreeing a _time_ and _place_ for something to happen (creating an _event_, as shown in the diagram above), you're introducing [Deadline Risk](Deadline-Risk.md). Miss the deadline, and you miss the bus.
+By _deciding to use the bus_ we've [Taken Action](/thinking/Glossary.md#taking-action). By agreeing a _time_ and _place_ for something to happen (creating an _event_, as shown in the diagram above), you're introducing [Deadline Risk](/tags/Deadline-Risk). Miss the deadline, and you miss the bus.
-As discussed above, _schedules_ (such as bus timetables) exist so that _two or more parties can coordinate_, and [Deadline Risk](Deadline-Risk.md) is on _all_ of the parties. While there's a risk I am late, there's also a risk the bus is late. I might miss the start of a concert, or the band might keep everyone waiting.
+As discussed above, _schedules_ (such as bus timetables) exist so that _two or more parties can coordinate_, and [Deadline Risk](/tags/Deadline-Risk) is on _all_ of the parties. While there's a risk I am late, there's also a risk the bus is late. I might miss the start of a concert, or the band might keep everyone waiting.
-In software development, deadlines are set in order to _coordinate work between teams_. For example, having a product ready in production at the same time as the marketing campaign starts. Fixing on an agreed deadline mitigates inter-team [Coordination Risk](Coordination-Risk.md).
+In software development, deadlines are set in order to _coordinate work between teams_. For example, having a product ready in production at the same time as the marketing campaign starts. Fixing on an agreed deadline mitigates inter-team [Coordination Risk](/tags/Coordination-Risk).
## Slack
-Each party can mitigate [Deadline Risk](Deadline-Risk.md) with _slack_. That is, ensuring that the exact time of the event isn't critical to your plans:
+Each party can mitigate [Deadline Risk](/tags/Deadline-Risk) with _slack_. That is, ensuring that the exact time of the event isn't critical to your plans:
- Don't build into your plans a _need_ to start shopping at 9am.
- Arrive at the bus-stop _early_.
The amount of slack you build into the schedule is likely dependent on the level of risk you face: I tend to arrive a few minutes early for a bus, because the risk is _low_ (there'll be another bus along soon). However, I try to arrive over an hour early for a flight, because I can't simply get on the next flight straight away and I've already paid for it, so the risk is _high_.
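
One way to reason about how much slack to build in is as a toy expected-loss calculation: the chance of missing the event falls as slack rises, and you weigh that against the cost of missing it. In the Java sketch below, the exponential decay and all the numbers are purely illustrative assumptions.

```java
// A toy model: expected loss = P(miss) x cost of missing.
// The exponential decay of P(miss) with slack is an assumption,
// chosen only to make the shape of the trade-off visible.
class SlackModel {
    static double expectedLoss(double slackMinutes, double costOfMissing) {
        double pMiss = Math.exp(-slackMinutes / 10.0);
        return pMiss * costOfMissing;
    }

    public static void main(String[] args) {
        // Bus: low cost of missing, so a few minutes of slack suffices.
        System.out.printf("bus:    %.2f%n", expectedLoss(5, 10));
        // Flight: high cost of missing justifies an hour of slack.
        System.out.printf("flight: %.2f%n", expectedLoss(60, 500));
    }
}
```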
-[Deadline Risk](Deadline-Risk.md) becomes very hard to manage when you have to coordinate actions with lots of tightly-constrained events. So what else can give? We can reduce the number of _parties_ involved in the event, which reduces risk, or, we can make sure all the parties are in the same _place_ to begin with.
+[Deadline Risk](/tags/Deadline-Risk) becomes very hard to manage when you have to coordinate actions with lots of tightly-constrained events. So what else can give? We can reduce the number of _parties_ involved in the event, which reduces risk, or, we can make sure all the parties are in the same _place_ to begin with.
## Focus
@@ -65,28 +65,28 @@ What happens if you miss the deadline? It could be:
- You have to go back to a budgeting committee to get more money.
- Members of the team get replaced because of lack of faith.
-.. or something else. So [Deadline Risk](Deadline-Risk.md) can be introduced by an authority in order to _sharpen focus_. This is how we arrive at tools like [SMART Objectives](https://en.wikipedia.org/wiki/SMART_criteria) and [KPI's (Key Performance Indicators)](https://en.wikipedia.org/wiki/Performance_indicator).
+... or something else. So [Deadline Risk](/tags/Deadline-Risk) can be introduced by an authority in order to _sharpen focus_. This is how we arrive at tools like [SMART Objectives](https://en.wikipedia.org/wiki/SMART_criteria) and [KPIs (Key Performance Indicators)](https://en.wikipedia.org/wiki/Performance_indicator).
-Deadlines change the way we evaluate goals and the solutions we choose because they force us to reckon with [Deadline Risk](Deadline-Risk.md). For example, in JFK's quote:
+Deadlines change the way we evaluate goals and the solutions we choose because they force us to reckon with [Deadline Risk](/tags/Deadline-Risk). For example, in JFK's quote:
> "First, I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth." - John F. Kennedy, 1961
-The 9-year timespan came from an authority figure (the president) and helped a huge team of people coordinate their efforts and arrive at a solution that would work within a given time-frame. The [Deadline Risk](Deadline-Risk.md) allowed the team to focus on mitigating the risk of missing that deadline.
+The 9-year timespan came from an authority figure (the president) and helped a huge team of people coordinate their efforts and arrive at a solution that would work within a given time-frame. The [Deadline Risk](/tags/Deadline-Risk) allowed the team to focus on mitigating the risk of missing that deadline.
Compare with this quote:
> “I love deadlines. I love the whooshing noise they make as they go by.” - [Douglas Adams](https://en.wikipedia.org/wiki/Douglas_Adams)
-As a successful author, Douglas Adams _didn't really care_ about the deadlines his publisher's gave him. The [Deadline Risk](Deadline-Risk.md) was minimal for him, because the publisher wouldn't be able to give his project to someone else to complete.
+As a successful author, Douglas Adams _didn't really care_ about the deadlines his publishers gave him. The [Deadline Risk](/tags/Deadline-Risk) was minimal for him, because the publisher wouldn't be able to give his project to someone else to complete.
## Deadline Risk and Schedule Risk
-[Schedule Risk](Scarcity-Risk.md#schedule-risk) and [Deadline Risk](Deadline-Risk.md) are clearly related: they both refer to the risk of running out of time. However, the _risk profile_ of each is very different:
+[Schedule Risk](/tags/Schedule-Risk) and [Deadline Risk](/tags/Deadline-Risk) are clearly related: they both refer to the risk of running out of time. However, the _risk profile_ of each is very different:
- - [Schedule Risk](Scarcity-Risk.md#schedule-risk) is _continuous_, like money. i.e. you want to waste as little of it as possible. Every extra day you take compounds [Schedule Risk](Scarcity-Risk.md#schedule-risk) additively. A day wasted at the start of the project is much the same as a day wasted at the end.
- - [Deadline Risk](Deadline-Risk.md) is _binary_. The impact of [Deadline Risk](Deadline-Risk.md) is either zero (you make it in time) or one (you are late and miss the flight). You don't particularly get a reward for being early.
+ - [Schedule Risk](/tags/Schedule-Risk) is _continuous_, like money: you want to waste as little of it as possible. Every extra day you take adds to the [Schedule Risk](/tags/Schedule-Risk). A day wasted at the start of the project is much the same as a day wasted at the end.
+ - [Deadline Risk](/tags/Deadline-Risk) is _binary_. The impact of [Deadline Risk](/tags/Deadline-Risk) is either zero (you make it in time) or one (you are late and miss the flight). You don't particularly get a reward for being early.
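
The difference between the two profiles can be captured as cost functions, as in the sketch below: one accrues continuously with every day used, the other is a step.

```java
// Schedule Risk accrues continuously: every day used has a cost.
// Deadline Risk is a step function: zero if you make it, the full
// cost if you don't - and no reward for being early.
class RiskProfiles {
    static double scheduleCost(double daysUsed, double costPerDay) {
        return daysUsed * costPerDay;
    }

    static double deadlineCost(double daysLate, double costOfMissing) {
        return daysLate > 0 ? costOfMissing : 0.0;
    }
}
```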
-So, these are two separate concepts, both useful in software development and other fields. Next up, something more specific: [Software Dependency Risk](Software-Dependency-Risk.md).
+So, these are two separate concepts, both useful in software development and other fields. Next up, something more specific: [Software Dependency Risk](/tags/Software-Dependency-Risk).
diff --git a/docs/risks/Dependency-Risks/Dependency-Risk.md b/docs/risks/Dependency-Risks/Dependency-Risk.md
index f96a3e650..5a0dd4d96 100644
--- a/docs/risks/Dependency-Risks/Dependency-Risk.md
+++ b/docs/risks/Dependency-Risks/Dependency-Risk.md
@@ -19,7 +19,7 @@ part_of: Operational Risk
# Dependency Risks
-[Dependency Risk](Dependency-Risk.md) is the risk you take on whenever you have a dependency on something (or someone) else.
+[Dependency Risk](/tags/Dependency-Risk) is the risk you take on whenever you have a dependency on something (or someone) else.
One simple example could be that the software service you write might depend on hardware to run on: if the server goes down, the service goes down too. In turn, the server depends on electricity from a supplier, as well as a network connection from a provider. If either of these dependencies isn't met, the service is out of commission.
@@ -27,13 +27,13 @@ Dependencies can be on _events_, _people_, _teams_, _work_, _processes_, _softwa
In order to avoid repetition, and also to break down this large topic, we're going to look at this over 7 sections:
- - This first section will look at dependencies _in general_, and some of the variations of [Dependency Risk](Dependency-Risk.md).
- - Next, we'll look at [Scarcity Risk](Scarcity-Risk.md), because time, money and staff are scarce resources in every project.
- - We'll cover [Deadline Risk](Deadline-Risk.md), and discuss the purpose of Events and Deadlines, and how they enable us to coordinate around dependency use.
- - Then, we'll move on to look specifically at [Software Dependency Risk](Software-Dependency-Risk.md), covering using libraries, software services and building on top of the work of others.
- - Then, we'll take a look at [Process Risk](Process-Risk.md), which is still [Dependency Risk](Dependency-Risk.md), but we'll be considering more organisational factors and how bureaucracy comes into the picture.
- - After that, we'll take a closer look at [Boundary Risk](Boundary-Risk.md) and [Dead-End Risk](Complexity-Risk.md#dead-end-risk). These are the risks you face in making choices about what to depend on.
- - Finally, we'll wrap up this analysis with a look at some of the specific problems around depending on other people or businesses in [Agency Risk](Agency-Risk.md).
+ - This first section will look at dependencies _in general_, and some of the variations of [Dependency Risk](/tags/Dependency-Risk).
+ - Next, we'll look at [Scarcity Risk](/tags/Scarcity-Risk), because time, money and staff are scarce resources in every project.
+ - We'll cover [Deadline Risk](/tags/Deadline-Risk), and discuss the purpose of Events and Deadlines, and how they enable us to coordinate around dependency use.
+ - Then, we'll move on to look specifically at [Software Dependency Risk](/tags/Software-Dependency-Risk), covering using libraries, software services and building on top of the work of others.
+ - Then, we'll take a look at [Process Risk](/tags/Process-Risk), which is still [Dependency Risk](/tags/Dependency-Risk), but we'll be considering more organisational factors and how bureaucracy comes into the picture.
+ - After that, we'll take a closer look at [Boundary Risk](/tags/Boundary-Risk) and [Dead-End Risk](/tags/Dead-End-Risk). These are the risks you face in making choices about what to depend on.
+ - Finally, we'll wrap up this analysis with a look at some of the specific problems around depending on other people or businesses in [Agency Risk](/tags/Agency-Risk).
## Why Have Dependencies?
@@ -43,7 +43,7 @@ This isn't even lucky though: life has adapted to build dependencies on things t
Although life exists at the bottom of the ocean around [hydrothermal vents](https://en.wikipedia.org/wiki/Hydrothermal_vent), it is a very different kind of life to ours and has a different set of dependencies given its circumstances.
-This tells us a lot about [Dependency Risk](Dependency-Risk.md) right here:
+This tells us a lot about [Dependency Risk](/tags/Dependency-Risk) right here:
- On the one hand, _depending on something_ is very often helpful, and quite often essential. (For example, all life seems to depend on water.)
- Successful organisms _adapt_ to the dependencies available to them (like the thermal vent creatures).
@@ -56,26 +56,26 @@ Let's look at four types of risk that apply to every dependency: Fit, Reliabili
## Fit Risk
-In order to illustrate some of the different [Dependency Risks](Dependency-Risk.md), let's introduce a running example: trying to get to work each day. There are probably a few alternative ways to make your journey each day, such as _by car_, _walking_ or _by bus_. These are all alternative dependencies but give you the same _feature_: they'll get you there.
+In order to illustrate some of the different [Dependency Risks](/tags/Dependency-Risk), let's introduce a running example: trying to get to work each day. There are probably a few alternative ways to make your journey each day, such as _by car_, _walking_ or _by bus_. These are all alternative dependencies but give you the same _feature_: they'll get you there.
-Normally, we'll use the same dependency each day. This speaks to the fact that each of these approaches has different [Feature Fit Risk](Feature-Risk.md#feature-fit-risk). Perhaps you choose going by bus over going by car because of the risk that owning the car is expensive, or that you might not be able to find somewhere to park it.
+Normally, we'll use the same dependency each day. This speaks to the fact that each of these approaches has different [Feature Fit Risk](/tags/Feature-Fit-Risk). Perhaps you choose going by bus over going by car because of the risk that owning the car is expensive, or that you might not be able to find somewhere to park it.
![Two-Dimensions of Feature Fit for the bus-ride](/img/generated/risks/dependency/dependency-risk-fit.png)
-But there are a couple of problems with buses you don't have with your own car, as shown in the above diagram. A bus might take you to lots of in-between places you _didn't_ want to go, which is [Conceptual Integrity Risk](Feature-Risk.md#conceptual-integrity-risk) and we saw this already in the section on [Feature Risk](Feature-Risk.md). Also, it might not go at the time you want it to, which is [Feature-Fit-Risk](Feature-Risk.md#feature-fit-risk).
+But there are a couple of problems with buses you don't have with your own car, as shown in the above diagram. A bus might take you to lots of in-between places you _didn't_ want to go, which is [Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk) and we saw this already in the section on [Feature Risk](/tags/Feature-Risk). Also, it might not go at the time you want it to, which is [Feature Fit Risk](/tags/Feature-Fit-Risk).
-What this shows us is that [Fit Risks](Feature-Risk.md#feature-fit-risk) are as much a problem for the suppliers of the dependency (the people running the bus service) as they are for the people (like you or I) _using_ the dependency.
+What this shows us is that [Fit Risks](/tags/Feature-Fit-Risk) are as much a problem for the suppliers of the dependency (the people running the bus service) as they are for the people (like you or me) _using_ the dependency.
## Invisibility Risk
Dependencies (like the bus) make life simpler for you by taking on complexity on your behalf.
-In software, dependencies are a way to manage [Complexity Risk](Complexity-Risk.md). The reason for this is that a dependency gives you an [abstraction](../thinking/Glossary.md#abstraction): you no longer need to know _how_ to do something, (that's the job of the dependency), you just need to interact with the dependency properly to get the job done. Buses are _perfect_ for people who can't drive, after all.
+In software, dependencies are a way to manage [Complexity Risk](/tags/Complexity-Risk). The reason for this is that a dependency gives you an [abstraction](/thinking/Glossary.md#abstraction): you no longer need to know _how_ to do something (that's the job of the dependency); you just need to interact with the dependency properly to get the job done. Buses are _perfect_ for people who can't drive, after all.
![Dependencies help with complexity risk, but come with their own attendant risks](/img/generated/risks/dependency/dependency-risk.png)
-But (as shown in the above diagram) this means that all of the issues of abstractions that we covered in [Communication Risk](Communication-Risk.md) apply. For example, there is [Invisibility Risk](Communication-Risk.md#invisibility-risk) because you probably don't have a full view of what the dependency is doing. Nowadays, bus stops have a digital "arrivals" board which gives you details of when the bus will arrive, and shops publish their opening hours online. But, abstraction always means the loss of detail (the bus might be two minutes away but could already be full).
+But (as shown in the above diagram) this means that all of the issues of abstractions that we covered in [Communication Risk](/tags/Communication-Risk) apply. For example, there is [Invisibility Risk](/tags/Invisibility-Risk) because you probably don't have a full view of what the dependency is doing. Nowadays, bus stops have a digital "arrivals" board which gives you details of when the bus will arrive, and shops publish their opening hours online. But, abstraction always means the loss of detail (the bus might be two minutes away but could already be full).
## Dependencies And Complexity
@@ -83,17 +83,17 @@ In Rich Hickey's talk, [Simple Made Easy](https://www.infoq.com/presentations/Si
But: living systems are not simple. Not anymore. They evolved in the direction of increasing complexity because life was _easier_ that way. In the "simpler" direction, life is first _harder_ and then _impossible_, and then an evolutionary dead-end.
-Depending on things makes _your job easier_. But the [Complexity Risk](Complexity-Risk.md) hasn't gone away: it's just _transferred_ to the dependency. It's just [division of labour](https://en.wikipedia.org/wiki/Division_of_labour) and dependency hierarchies, as we saw in [Complexity Risk](Complexity-Risk.md#hierarchies-and-modularisation).
+Depending on things makes _your job easier_. But the [Complexity Risk](/tags/Complexity-Risk) hasn't gone away: it's just _transferred_ to the dependency. It's just [division of labour](https://en.wikipedia.org/wiki/Division_of_labour) and dependency hierarchies, as we saw in [Complexity Risk](Complexity-Risk.md#hierarchies-and-modularisation).
Our economic system and our software systems exhibit the same tendency-towards-complexity. For example, the television in my house now is _vastly more complicated_ than the one in my home when I was a child. But, it contains much more functionality and consumes much less power and space.
## Managing Dependency Risk
-Arguably, managing [Dependency Risk](Dependency-Risk.md) is _what Project Managers do_. Their job is to meet the project's [Goal](../thinking/Glossary.md#goal) by organising the available dependencies into some kind of useful order.
+Arguably, managing [Dependency Risk](/tags/Dependency-Risk) is _what Project Managers do_. Their job is to meet the project's [Goal](/thinking/Glossary.md#goal) by organising the available dependencies into some kind of useful order.
-There are some tools for managing dependency risk: [Gantt Charts](https://en.wikipedia.org/wiki/Gantt_chart) for example, arrange work according to the capacity of the resources (i.e. dependencies) available, but also the _dependencies between the tasks_. If task **B** requires the outputs of task **A**, then clearly task **A** comes first and task **B** starts after it finishes. We'll look at this more in [Process Risk](Process-Risk.md).
+There are some tools for managing dependency risk. [Gantt Charts](https://en.wikipedia.org/wiki/Gantt_chart), for example, arrange work according to the capacity of the resources (i.e. dependencies) available, but also according to the _dependencies between the tasks_. If task **B** requires the outputs of task **A**, then clearly task **A** comes first and task **B** starts after it finishes. We'll look at this more in [Process Risk](/tags/Process-Risk).
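+A minimal sketch of that scheduling constraint, using hypothetical task names: each task starts only once everything it depends on has finished.
+```javascript
+// Hypothetical tasks: each maps to the list of tasks it depends on.
+const tasks = {
+  design: [],
+  build: ["design"],
+  test: ["build"],
+  release: ["build", "test"]
+};
+
+// Repeatedly pick any task whose dependencies are all done (assumes no cycles).
+function schedule(tasks) {
+  const done = [];
+  while (done.length < Object.keys(tasks).length) {
+    for (const [task, deps] of Object.entries(tasks)) {
+      if (!done.includes(task) && deps.every(d => done.includes(d))) {
+        done.push(task); // task A finishes before dependent task B starts
+      }
+    }
+  }
+  return done;
+}
+
+console.log(schedule(tasks)); // [ 'design', 'build', 'test', 'release' ]
+```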
-We'll look in more detail at project management in Part 3, later. But now, let's get into specifics with [Scarcity Risk](Scarcity-Risk.md).
+We'll look in more detail at project management in Part 3, later. But now, let's get into specifics with [Scarcity Risk](/tags/Scarcity-Risk).
## Types Of Dependency Risk
diff --git a/docs/risks/Dependency-Risks/Process-Risk.md b/docs/risks/Dependency-Risks/Process-Risk.md
index f176d1561..83de15760 100644
--- a/docs/risks/Dependency-Risks/Process-Risk.md
+++ b/docs/risks/Dependency-Risks/Process-Risk.md
@@ -16,7 +16,7 @@ part_of: Dependency Risk
-[Process Risk](Process-Risk.md) is the risk you take on whenever you embark on completing a _process_.
+[Process Risk](/tags/Process-Risk) is the risk you take on whenever you embark on completing a _process_.
> "**Process**: a process is a set of activities that interact to achieve a result." - [Process, _Wikipedia_](https://en.wikipedia.org/wiki/Process)
@@ -28,46 +28,46 @@ Processes commonly involve _forms_: if you're filling out a form (whether on pa
As the above diagram shows, process exists to mitigate other kinds of risk. For example:
- - **[Coordination Risk](Coordination-Risk.md)**: you can often use process to help people coordinate. For example, a [Production Line](https://en.wikipedia.org/wiki/Production_line) is a process where work being done by one person is pushed to the next person when it's done. A room booking process is designed to efficiently allocate meeting rooms.
- - **[Operational Risk](Operational-Risk.md)**: this encompasses the risk of people _not doing their job properly_. But, by having a process, (and asking, did this person follow the process?) you can draw a distinction between a process failure and a personnel failure. For example, accepting funds from a money launderer _could_ be a failure of a bank employee. But, if they followed the _process_, it's a failure of the [Process](Process-Risk.md) itself.
- - **[Complexity Risk](Complexity-Risk.md)**: working _within a process_ can reduce the amount of [Complexity](Complexity-Risk.md) you have to think about. We accept that processes are going to slow us down, but we appreciate the reduction in risk this brings. Clearly, the complexity hasn't gone away, but it's hidden within design of the process. For example, [McDonalds](https://en.wikipedia.org/wiki/McDonald's) tries to design its operation so that preparing each food item is a simple process to follow, reducing complexity (and training time) for the staff.
+ - **[Coordination Risk](/tags/Coordination-Risk)**: you can often use process to help people coordinate. For example, a [Production Line](https://en.wikipedia.org/wiki/Production_line) is a process where work being done by one person is pushed to the next person when it's done. A room booking process is designed to efficiently allocate meeting rooms.
+ - **[Operational Risk](/tags/Operational-Risk)**: this encompasses the risk of people _not doing their job properly_. But, by having a process, (and asking, did this person follow the process?) you can draw a distinction between a process failure and a personnel failure. For example, accepting funds from a money launderer _could_ be a failure of a bank employee. But, if they followed the _process_, it's a failure of the [Process](/tags/Process-Risk) itself.
+ - **[Complexity Risk](/tags/Complexity-Risk)**: working _within a process_ can reduce the amount of [Complexity](/tags/Complexity-Risk) you have to think about. We accept that processes are going to slow us down, but we appreciate the reduction in risk this brings. Clearly, the complexity hasn't gone away, but it's hidden within the design of the process. For example, [McDonalds](https://en.wikipedia.org/wiki/McDonald's) tries to design its operation so that preparing each food item is a simple process to follow, reducing complexity (and training time) for the staff.
-These are all examples of [Risk Mitigation](../thinking/Glossary.md#mitigated-risk) for the _owners_ of the process. But often the _consumers_ of the process end up picking up [Process Risks](Process-Risk.md) as a result:
+These are all examples of [Risk Mitigation](/thinking/Glossary.md#mitigated-risk) for the _owners_ of the process. But often the _consumers_ of the process end up picking up [Process Risks](/tags/Process-Risk) as a result:
- - **[Invisibility Risk](Communication-Risk.md#invisibility-risk)**: it's often not possible to see how far along a process is to completion. Sometimes, you can do this to an extent. For example, when I send a package for delivery, I can see roughly how far it's got on the tracking website. But this is still less-than-complete information and is a representation of reality.
- - **[Dead-End Risk](Complexity-Risk.md#dead-end-risk)**: even if you have the right process, initiating a process has no guarantee that your efforts won't be wasted and you'll be back where you started from. The chances of this happening increase as you get further from the standard use-case for the process, and the sunk cost increases with the length of time the process takes to complete.
- - **[Feature Access Risk](Feature-Risk.md#feature-access-risk)**: processes generally handle the common stuff, but ignore the edge cases. For example, a form on a website might not be designed to be accessible to disabled people, or might only cater to some common subset of use-cases.
+ - **[Invisibility Risk](/tags/Invisibility-Risk)**: it's often not possible to see how far along a process is to completion. Sometimes you can, to an extent: when I send a package for delivery, I can see roughly how far it's got on the tracking website. But this is still less-than-complete information, and only a representation of reality.
+ - **[Dead-End Risk](/tags/Dead-End-Risk)**: even if you have the right process, initiating a process is no guarantee that your efforts won't be wasted and you'll be back where you started from. The chances of this happening increase as you get further from the standard use-case for the process, and the sunk cost increases with the length of time the process takes to complete.
+ - **[Feature Access Risk](/tags/Feature-Access-Risk)**: processes generally handle the common stuff, but ignore the edge cases. For example, a form on a website might not be designed to be accessible to disabled people, or might only cater to some common subset of use-cases.
![Process Risk, and its consequences, compared with Agency Risk](/img/generated/risks/process/process-risk.png)
-When we talk about "[Process Risk](Process-Risk.md)" we are really referring to these types of risks, arising from "following a set of instructions." Compare this with [Agency Risk](Agency-Risk.md) (which we will review in a forthcoming section), which is risks due to _not_ following the instructions, as shown in the above diagram . Let's look at two examples, how [Process Risk](Process-Risk.md) can lead to [Invisibility Risks](Communication-Risk.md#invisibility-risk) and [Agency Risk](Agency-Risk.md).
+When we talk about "[Process Risk](/tags/Process-Risk)" we are really referring to these types of risks, arising from "following a set of instructions." Compare this with [Agency Risk](/tags/Agency-Risk) (which we will review in a forthcoming section), which is the risk due to _not_ following the instructions, as shown in the above diagram. Let's look at two examples of how [Process Risk](/tags/Process-Risk) can lead to [Invisibility Risks](/tags/Invisibility-Risk) and [Agency Risk](/tags/Agency-Risk).
### Processes And Invisibility Risk
-Processes tend to work well for the common cases because *practice makes perfect*, but they are really tested when unusual situations occur. Expanding processes to deal with edge-cases incurs [Complexity Risk](Complexity-Risk.md), so often it's better to try and have clear boundaries of what is "in" and "out" of the process' domain.
+Processes tend to work well for the common cases because *practice makes perfect*, but they are really tested when unusual situations occur. Expanding processes to deal with edge-cases incurs [Complexity Risk](/tags/Complexity-Risk), so often it's better to try and have clear boundaries of what is "in" and "out" of the process' domain.
-Sometimes, processes are _not_ used commonly. How can we rely on them anyway? Usually, the answer is to build in extra [feedback loops](../thinking/Glossary.md#feedback-loop):
+Sometimes, processes are _not_ used commonly. How can we rely on them anyway? Usually, the answer is to build in extra [feedback loops](/thinking/Glossary.md#feedback-loop):
- Testing that backups work, even when no backup is needed.
- Running through a disaster recovery scenario at the weekend.
- Increasing the release cadence, so that we practice the release process more.
-The feedback loops allow us to perform [Retrospectives and Reviews](../practices/Review.md) to improve our processes.
+The feedback loops allow us to perform [Retrospectives and Reviews](/tags/Approvals) to improve our processes.
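+One way to build such a feedback loop is to exercise the rarely-used process automatically. A minimal sketch, assuming hypothetical `restoreBackup` and `checksum` helpers:
+```javascript
+// Practice the restore process on a schedule, so we get feedback on whether
+// backups work *before* a real disaster forces us to find out.
+async function verifyLatestBackup({ restoreBackup, checksum, original }) {
+  const restored = await restoreBackup("latest");  // exercise the process
+  if (checksum(restored) !== checksum(original)) { // check the result
+    throw new Error("Backup restore failed verification");
+  }
+  return true;
+}
+```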
### Processes, Sign-Offs and Agency Risk
-Often, Processes will include sign-off steps. The [Sign-Off](../practices/Sign-Off.md) is an interesting mechanism:
+Often, Processes will include sign-off steps. The [Sign-Off](/tags/Approvals) is an interesting mechanism:
- By signing off on something for the business, people are usually in some part staking their reputation on something being right.
- - Therefore, you would expect that sign-off involves a lot of [Agency Risk](Agency-Risk.md): people don't want to expose themselves in career-limiting ways.
+ - Therefore, you would expect that sign-off involves a lot of [Agency Risk](/tags/Agency-Risk): people don't want to expose themselves in career-limiting ways.
- Therefore, the bigger the risk they are being asked to swallow, the more cumbersome and protracted the sign-off process.
-Often, [Sign-Offs](../practices/Sign-Off.md) boil down to a balance of risk for the signer: on the one hand, _personal, career risk_ from signing off, on the other, the risk of upsetting the rest of the staff waiting for the sign-off, and the [Dead End Risk](Complexity-Risk.md#dead-end-risk) of all the effort gone into getting the sign-off if they don't.
+Often, [Sign-Offs](/tags/Approvals) boil down to a balance of risk for the signer: on the one hand, _personal, career risk_ from signing off, on the other, the risk of upsetting the rest of the staff waiting for the sign-off, and the [Dead End Risk](/tags/Dead-End-Risk) of all the effort gone into getting the sign-off if they don't.
This is a nasty situation, but there are a couple of ways to de-risk this:
- - Break [Sign-Offs](../practices/Sign-Off.md) down into bite-size chunks of risk that are acceptable to those doing the signing-off.
- - Agree far-in-advance the sign-off criteria. As discussed in [Risk Theory](../thinking/Evaluating-Risk.md), people have a habit of heavily discounting future risk, and it's much easier to get agreement on the _criteria_ than it is to get the sign-off.
+ - Break [Sign-Offs](/tags/Approvals) down into bite-size chunks of risk that are acceptable to those doing the signing-off.
+ - Agree the sign-off criteria far in advance. As discussed in [Risk Theory](/thinking/Evaluating-Risk.md), people have a habit of heavily discounting future risk, and it's much easier to get agreement on the _criteria_ than it is to get the sign-off.
## Evolution Of Process
@@ -83,25 +83,25 @@ Let's look at an example of how that can happen in a step-wise way.
![Step 1: clients `C` need `A` to do their jobs, incurring Complexity Risk.](/img/generated/risks/process/step1.png)
-1. As the above diagram shows, there exists a group of people inside a company `C`, which need a certain something `A` in order to get their jobs done. Because they are organising, providing and creating `A` to do their jobs, they are responsible for all the [Complexity Risk](Complexity-Risk.md) of `A`.
+1. As the above diagram shows, there exists a group of people inside a company, `C`, who need a certain something `A` in order to get their jobs done. Because they are organising, providing and creating `A` to do their jobs, they are responsible for all the [Complexity Risk](/tags/Complexity-Risk) of `A`.
![Step 2: team `B` doing `A` for clients `C`. Complexity Risk is transferred to B, but C pick up Staff Risk.](/img/generated/risks/process/step2.png)
-2. Because `A` is risky, a new team (`B`) is spun up to deal with the [Complexity Risk](Complexity-Risk.md), which might let `C` get on with their "proper" jobs. As shown in the diagram above, this is really useful: `C`'s is job much easier (reduced [Complexity Risk](Complexity-Risk.md)) as they have an easier path to `A` than before. But the risk for `A` hasn't really gone - they're now just dependent on `B` instead. When members of `B` fail to deliver, this is [Staff Risk](Scarcity-Risk.md#staff-risk) for `C`.
+2. Because `A` is risky, a new team (`B`) is spun up to deal with the [Complexity Risk](/tags/Complexity-Risk), which might let `C` get on with their "proper" jobs. As shown in the diagram above, this is really useful: `C`'s job is much easier (reduced [Complexity Risk](/tags/Complexity-Risk)) as they have an easier path to `A` than before. But the risk for `A` hasn't really gone - they're now just dependent on `B` instead. When members of `B` fail to deliver, this is [Staff Risk](Scarcity-Risk.md#staff-risk) for `C`.
![Step 3: team `B` formalises the dependency with a Process](/img/generated/risks/process/step3.png)
3. Problems are likely to occur eventually in the `B`/`C` relationship. Perhaps some members of the `B` team give better service than others, or deal with more variety in requests? In order to standardise the response from `B` and also to reduce scope-creep in requests from `C`, `B` organises bureaucratically so that there is a controlled process (`P`) by which `A` can be accessed. Members of teams `B` and `C` now interact via some request mechanism like forms (or another protocol).
- - As shown in the above diagram, because of `P`, `B` can now process requests on a first-come-first-served basis and deal with them all in the same way: the more unusual requests from `C` might not fit the model. These [Process Risks](Process-Risk.md) are now the problem of the form-filler in `C`.
- - Since this is [Abstraction](../thinking/Glossary.md#abstraction), `C` now has [Invisibility Risk](Communication-Risk.md#invisibility-risk) since it can't access team `B` and see how it works.
- - Team `B` may also use `P` to introduce other bureaucracy like authorisation and sign-off steps or payment barriers. All of this increases [Process Risk](Process-Risk.md) for team C.
+ - As shown in the above diagram, because of `P`, `B` can now process requests on a first-come-first-served basis and deal with them all in the same way: the more unusual requests from `C` might not fit the model. These [Process Risks](/tags/Process-Risk) are now the problem of the form-filler in `C`.
+ - Since this is [Abstraction](/thinking/Glossary.md#abstraction), `C` now has [Invisibility Risk](/tags/Invisibility-Risk) since it can't access team `B` and see how it works.
+ - Team `B` may also use `P` to introduce other bureaucracy like authorisation and sign-off steps or payment barriers. All of this increases [Process Risk](/tags/Process-Risk) for team `C`.
![Person D acts as a middleman for customers needing some variant of `A`, transferring Complexity Risk](/img/generated/risks/process/step4.png)
4. Teams like `B` can sometimes end up in "Monopoly" positions within a business. This means that clients like `C` are forced to deal with whatever process `B` wishes to enforce. Although they are unable to affect process `P`, `C` still have risks they want to transfer.
- - In the above diagram, Person `D`, who has experience working with team `B` acts as a middleman for some of `C`, requiring some variant of `A` . They are able to help navigate the bureaucracy (handle with [Process Risk](Process-Risk.md)).
+ - In the above diagram, Person `D`, who has experience working with team `B`, acts as a middleman for some of `C` who require some variant of `A`. They are able to help navigate the bureaucracy, handling the [Process Risk](/tags/Process-Risk) on behalf of their clients.
- The cycle potentially starts again: will `D` end up becoming a new team, with a new process?
In this example, you can see how the organisation evolves process to mitigate risk around the use (and misuse) of `A`. This is an example of _Process following Strategy_:
@@ -111,31 +111,31 @@ In this example, you can see how the organisation evolves process to mitigate ri
Two key take-aways from this:
- **The System Gets More Complex**: with different teams working to mitigate different risks in different ways, we end up with a more complex situation than when we started. Although we've _evolved_ in this direction by mitigating risks, it's not necessarily the case that the end result is _more efficient_. In fact, as we will see in [Map-And-Territory Risk](Map-And-Territory-Risk.md#markets), this evolution can lead to some very inadequate (but nonetheless stable) systems.
- - **Organisational process evolves to mitigate risk**: just as we've shown that [actions are about mitigating risk](../thinking/Start.md), we've now seen that these actions get taken in an evolutionary way. That is, there is "pressure" on our internal processes to reduce risk. The people maintaining these processes feel the risk, and modify their processes in response. Let's look at a real-life example:
+ - **Organisational process evolves to mitigate risk**: just as we've shown that [actions are about mitigating risk](/thinking/Start.md), we've now seen that these actions get taken in an evolutionary way. That is, there is "pressure" on our internal processes to reduce risk. The people maintaining these processes feel the risk, and modify their processes in response. Let's look at a real-life example:
## An Example - Release Processes
The years I have worked in the Finance Industry have given me time to observe how, across an entire industry, process can evolve in response to regulatory pressure, organisational maturity and the need to mitigate risk:
1. Initially, I could release software by logging onto the production accounts with a shared password that everyone knew, and deploy software or change data in the database.
-2. The first issue with this is [Agency Risk from bad actors](Agency-Risk.md): how could you know that the numbers weren't being altered in the databases? _Production Auditing_ was introduced so that at least you could tell what was being changed and when, in order to point the blame later.
-3. But there was still plenty of scope for deliberate or accidental [Dead-End Risk](Complexity-Risk.md#dead-end-risk) damage. Next, passwords were taken out of the hands of developers and you needed approval to "break glass" to get onto production.
-4. The increasing complexity (and therefore [Complexity Risk](Complexity-Risk.md)) in production environments meant that sometimes changes collided with each other, or were performed at inopportune times. Change Requests were introduced. This is an approval process which asks you to describe what you want to change in production, and why you want to change it.
+2. The first issue with this is [Agency Risk from bad actors](/tags/Agency-Risk): how could you know that the numbers weren't being altered in the databases? _Production Auditing_ was introduced so that at least you could tell what was being changed and when, in order to apportion blame later.
+3. But there was still plenty of scope for deliberate or accidental [Dead-End Risk](/tags/Dead-End-Risk) damage. Next, passwords were taken out of the hands of developers and you needed approval to "break glass" to get onto production.
+4. The increasing complexity (and therefore [Complexity Risk](/tags/Complexity-Risk)) in production environments meant that sometimes changes collided with each other, or were performed at inopportune times. Change Requests were introduced. This is an approval process which asks you to describe what you want to change in production, and why you want to change it.
5. The change request software is generally awful, making the job of raising change requests tedious and time-consuming. Therefore, developers would _automate_ the processes for release, sometimes including the process to write the change request itself (see the sketch after this list). This allowed them to improve release cadence at the expense of owning more code.
6. Auditors didn't like the fact that this automation existed, because effectively, that meant that developers could get access to production with the press of a button, taking you back to step 1...
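+A hypothetical sketch of the automation in step 5, where `raiseChangeRequest` and `deploy` are assumed helpers rather than a real API:
+```javascript
+// Automating the tedious change request step as part of the release itself.
+async function release(version, { raiseChangeRequest, deploy }) {
+  const ticket = await raiseChangeRequest({
+    summary: `Deploy ${version} to production`,
+    risk: "low"
+  });
+  await deploy(version); // the actual release
+  return ticket;         // audit trail for the approvers
+}
+```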
## Bureaucracy
-Where we've talked about process evolution above, the actors involved have been acting in good faith: they are working to mitigate risk in the organisation. The [Process Risk](Process-Risk.md) that accretes along the way is an _unintended consequence_: There is no guarantee that the process that arises will be humane and intuitive. Many organisational processes end up being baroque or Kafka-esque, forcing unintuitive behaviour on their users. This is partly because process design is _hard_ and it's difficult to anticipate all the various ways a process will be used ahead-of-time.
+Where we've talked about process evolution above, the actors involved have been acting in good faith: they are working to mitigate risk in the organisation. The [Process Risk](/tags/Process-Risk) that accretes along the way is an _unintended consequence_: there is no guarantee that the process that arises will be humane and intuitive. Many organisational processes end up being baroque or Kafka-esque, forcing unintuitive behaviour on their users. This is partly because process design is _hard_ and it's difficult to anticipate all the various ways a process will be used ahead-of-time.
-But [Parkinson's Law](https://en.wikipedia.org/wiki/Parkinsons_law) takes this one step further: the human actors shaping the organisation will abuse their positions of power in order to further their own careers (this is [Agency Risk](Agency-Risk.md), which we will come to in a future section):
+But [Parkinson's Law](https://en.wikipedia.org/wiki/Parkinsons_law) takes this one step further: the human actors shaping the organisation will abuse their positions of power in order to further their own careers (this is [Agency Risk](/tags/Agency-Risk), which we will come to in a future section):
> "Parkinson's law is the adage that "work expands so as to fill the time available for its completion". It is sometimes applied to the growth of bureaucracy in an organisation... He explains this growth by two forces: (1) 'An official wants to multiply subordinates, not rivals' and (2) 'Officials make work for each other.'" - [Parkinson's Law, _Wikipedia_](https://en.wikipedia.org/wiki/Parkinsons_law)
-This implies that there is a tendency for organisations to end up with _needless levels of [Process Risk](Process-Risk.md)_.
+This implies that there is a tendency for organisations to end up with _needless levels of [Process Risk](/tags/Process-Risk)_.
To fix this, design needs to happen at a higher level. In our code, we would [Refactor](Complexity-Risk.md#technical-debt) these processes to remove the unwanted complexity. In a business, it requires re-organisation at a higher level to redefine the boundaries and responsibilities between the teams.
-Next in the tour of [Dependency Risks](Dependency-Risk.md), it's time to look at [Boundary Risk](Boundary-Risk.md).
+Next in the tour of [Dependency Risks](/tags/Dependency-Risk), it's time to look at [Boundary Risk](/tags/Boundary-Risk).
diff --git a/docs/risks/Dependency-Risks/Reliability-Risk.md b/docs/risks/Dependency-Risks/Reliability-Risk.md
index 106fb4068..ec53c0109 100644
--- a/docs/risks/Dependency-Risks/Reliability-Risk.md
+++ b/docs/risks/Dependency-Risks/Reliability-Risk.md
@@ -27,7 +27,7 @@ This points to the problem that when we use an external dependency, we are at th
It's easy to think about reliability for something like a bus: sometimes, it's late due to weather, or cancelled due to driver sickness, or the route changes unexpectedly due to road works.
-In software, it's no different: _unreliability_ is the flip-side of [Feature Implementation Risk](Feature-Risk.md#implementation-risk). It's caused in the gap between the real behaviour of the software and the expectations for it.
+In software, it's no different: _unreliability_ is the flip-side of [Feature Implementation Risk](/tags/Implementation-Risk). It arises in the gap between the real behaviour of the software and the expectations for it.
There is an upper bound on the reliability of the software you write, and this is based on the dependencies you use and (in turn) the reliability of those dependencies:
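+As a simplified sketch (with illustrative numbers, and assuming dependencies fail independently), that upper bound is the product of the reliabilities involved:
+```javascript
+// Your system is only as reliable as the product of its parts.
+const myCode = 0.99;                // reliability of the code you write
+const dependencies = [0.995, 0.98]; // reliability of each dependency
+
+const upperBound = dependencies.reduce((r, d) => r * d, myCode);
+console.log(upperBound.toFixed(3)); // "0.965" - worse than any single part
+```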
@@ -41,4 +41,4 @@ This kind of stuff is encapsulated in the science of [Reliability Engineering](h
This was applied on NASA missions, and then in the 1970s to car design following the [Ford Pinto exploding car](https://en.wikipedia.org/wiki/Ford_Pinto#Design_flaws_and_ensuing_lawsuits) affair. But establishing the reliability of software dependencies like this would be _hard_ and _expensive_. We are more likely to mitigate [Reliability Risk](#reliability-risk) in software using _testing_, _redundancy_ and _reserves_, as shown in the diagram above.
-Additionally, we often rely on _proxies for reliability_. We'll look at these proxies (and the way in which software projects signal their reliability) in much more detail in the section on [Software Dependency Risk](Software-Dependency-Risk.md).
\ No newline at end of file
+Additionally, we often rely on _proxies for reliability_. We'll look at these proxies (and the way in which software projects signal their reliability) in much more detail in the section on [Software Dependency Risk](/tags/Software-Dependency-Risk).
\ No newline at end of file
diff --git a/docs/risks/Dependency-Risks/Scarcity-Risks/Funding-Risk.md b/docs/risks/Dependency-Risks/Scarcity-Risks/Funding-Risk.md
index ae3ccf219..f0a178c05 100644
--- a/docs/risks/Dependency-Risks/Scarcity-Risks/Funding-Risk.md
+++ b/docs/risks/Dependency-Risks/Scarcity-Risks/Funding-Risk.md
@@ -17,10 +17,10 @@ part_of: Scarcity Risk
![Funding Risk](/img/generated/risks/scarcity/funding-risk.png)
-On a lot of software projects you are "handed down" deadlines from above and told to deliver by a certain date or face the consequences. But sometimes you're given a budget instead, which really just adds another layer of abstraction to the [Schedule Risk](Scarcity-Risk.md#schedule-risk). That is, do I have enough funds to cover the team for as long as I need them?
+On a lot of software projects you are "handed down" deadlines from above and told to deliver by a certain date or face the consequences. But sometimes you're given a budget instead, which really just adds another layer of abstraction to the [Schedule Risk](/tags/Schedule-Risk). That is, do I have enough funds to cover the team for as long as I need them?
This grants you some leeway as now you have two variables to play with: the _size_ of the team, and _how long_ you can run it for. The larger the team, the shorter the time you can afford to pay for it.
In startup circles, this "amount of time you can afford it" is called the ["Runway"](https://en.wiktionary.org/wiki/runway): you have to get the product to "take-off" (become profitable) before the runway ends.
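+A toy example with made-up numbers shows the trade-off: the same budget buys a small team for longer, or a big team for less time.
+```javascript
+// Runway = how many months the budget covers at a given burn rate.
+const funds = 600000;                // total budget (illustrative)
+const costPerPersonPerMonth = 10000; // salary plus overheads (assumed)
+
+const runwayMonths = teamSize => funds / (teamSize * costPerPersonPerMonth);
+console.log(runwayMonths(5));  // 12 months of runway
+console.log(runwayMonths(10)); // 6 months of runway
+```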
-Startups often spend a lot of time courting investors in order to get funding and mitigate this type of [Schedule Risk](Scarcity-Risk.md#schedule-risk). But, as shown in the diagram above, this activity usually comes at the expense of [Opportunity Risk](Scarcity-Risk.md#opportunity-risk) and [Feature Risk](Feature-Risk.md), as usually the same people are diverted into raise funds instead of building the project itself.
\ No newline at end of file
+Startups often spend a lot of time courting investors in order to get funding and mitigate this type of [Schedule Risk](/tags/Schedule-Risk). But, as shown in the diagram above, this activity usually comes at the expense of [Opportunity Risk](/tags/Opportunity-Risk) and [Feature Risk](/tags/Feature-Risk), as usually the same people are diverted into raising funds instead of building the project itself.
\ No newline at end of file
diff --git a/docs/risks/Dependency-Risks/Scarcity-Risks/Mitigations.md b/docs/risks/Dependency-Risks/Scarcity-Risks/Mitigations.md
index b37b87b94..614cfcb1c 100644
--- a/docs/risks/Dependency-Risks/Scarcity-Risks/Mitigations.md
+++ b/docs/risks/Dependency-Risks/Scarcity-Risks/Mitigations.md
@@ -1,20 +1,20 @@
## Mitigations
-Here are a selection of mitigations for [Scarcity Risk](Scarcity-Risk.md) in general:
+Here is a selection of mitigations for [Scarcity Risk](/tags/Scarcity-Risk) in general:
- **Buffers**: smoothing out peaks and troughs in utilisation.
- **Reservation Systems**: giving clients information _ahead_ of the dependency usage about whether the resource will be available to them.
- **Graceful degradation**: ensuring _some_ service in the event of over-subscription. It would be no use allowing people to cram onto the bus until it can't move.
- - **Demand Management**: having different prices during busy periods helps to reduce demand. Having "first class" seats means that higher-paying clients can get service even when the train is full. [Uber](https://www.uber.com) adjust prices in real-time by so-called [Surge Pricing](https://www.uber.com/en-GB/drive/partner-app/how-surge-works/). This is basically turning [Scarcity Risk](Scarcity-Risk.md) into a [Market Risk](Feature-Risk.md#market-risk) problem.
+ - **Demand Management**: having different prices during busy periods helps to reduce demand. Having "first class" seats means that higher-paying clients can get service even when the train is full. [Uber](https://www.uber.com) adjusts prices in real-time using so-called [Surge Pricing](https://www.uber.com/en-GB/drive/partner-app/how-surge-works/). This is basically turning [Scarcity Risk](/tags/Scarcity-Risk) into a [Market Risk](/tags/Market-Risk) problem.
- **Queues**: these provide a "fair" way of dealing with scarcity by exposing some mechanism for prioritising use of the resource. Buses operate a first-come-first-served system, whereas emergency departments in hospitals triage according to need.
 - **Pools**: reserving parts of a resource for a group of customers, and sharing within that group (see the sketch after this list).
- **Horizontal Scaling**: allowing a scarce resource to flexibly scale according to how much demand there is. (For example, putting on extra buses when the trains are on strike, or opening extra check-outs at the supermarket.)
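+A minimal sketch of the pool idea, with a hypothetical set of database connections reserved for one group of clients:
+```javascript
+// A fixed set of resources, shared out and returned after use.
+class Pool {
+  constructor(resources) { this.free = [...resources]; }
+  acquire() {
+    if (this.free.length === 0) return null; // scarcity: caller must wait or queue
+    return this.free.pop();
+  }
+  release(resource) { this.free.push(resource); }
+}
+
+const connections = new Pool(["conn1", "conn2", "conn3"]);
+const conn = connections.acquire();
+// ... use conn ...
+connections.release(conn);
+```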
-Much like [Reliability Risk](Dependency-Risk.md#reliability-risk), there is science for it:
+Much like [Reliability Risk](/tags/Reliability-Risk), there is a science to this:
- **[Queue Theory](https://en.wikipedia.org/wiki/Queueing_theory)** is all about building mathematical models of buffers, queues, pools and so forth.
- **[Logistics](https://en.wikipedia.org/wiki/Logistics)** is the practical organisation of the flows of materials and goods around things like [Supply Chains](https://en.wikipedia.org/wiki/Supply_chain),
- and **[Project Management](https://en.wikipedia.org/wiki/Project_management)** is in large part about ensuring the right resources are available at the right times.
-In this section, we've looked at various risks to do with scarcity of time, as a quantity we can spend like money. But frequently, we have a dependency on a specific _event_. On to [Deadline Risk](Deadline-Risk.md).
\ No newline at end of file
+In this section, we've looked at various risks to do with scarcity of time, as a quantity we can spend like money. But frequently, we have a dependency on a specific _event_. On to [Deadline Risk](/tags/Deadline-Risk).
\ No newline at end of file
diff --git a/docs/risks/Dependency-Risks/Scarcity-Risks/Red-Queen-Risk.md b/docs/risks/Dependency-Risks/Scarcity-Risks/Red-Queen-Risk.md
index b29d738ae..731998294 100644
--- a/docs/risks/Dependency-Risks/Scarcity-Risks/Red-Queen-Risk.md
+++ b/docs/risks/Dependency-Risks/Scarcity-Risks/Red-Queen-Risk.md
@@ -15,7 +15,7 @@ part_of: Scarcity Risk
-A more specific formulation of [Schedule Risk](Scarcity-Risk.md#schedule-risk) is [Red Queen Risk](Scarcity-Risk.md#red-queen-risk), which is that whatever you build at the start of the project will go slowly more-and-more out of date as the project goes on.
+A more specific formulation of [Schedule Risk](/tags/Schedule-Risk) is [Red Queen Risk](/tags/Red-Queen-Risk), which is that whatever you build at the start of the project will slowly go more and more out of date as the project goes on.
This is named after the Red Queen quote from Alice in Wonderland:
@@ -29,6 +29,6 @@ Now, they didn't _deliberately_ take 15 years to build this game (lots of things
![Red Queen Risk](/img/generated/risks/scarcity/red-queen-risk.png)
-Personally, I have suffered the pain on project teams where we've had to cope with legacy code and databases because the cost of changing them was too high. This is shown in the above diagram: mitigating [Red Queen Risk](#red-queen-risk) (by _keeping up-to-date_) has the [Attendant Risk](../thinking/Glossary.md#attendant-risk) of costing time and money, which might not seem worth it. Any team who is stuck using [Visual Basic 6.0](https://en.wikipedia.org/wiki/Visual_Basic) is here.
+Personally, I have suffered this pain on project teams where we've had to cope with legacy code and databases because the cost of changing them was too high. This is shown in the above diagram: mitigating [Red Queen Risk](#red-queen-risk) (by _keeping up-to-date_) has the [Attendant Risk](/thinking/Glossary.md#attendant-risk) of costing time and money, which might not seem worth it. Any team that is stuck using [Visual Basic 6.0](https://en.wikipedia.org/wiki/Visual_Basic) is here.
-It's possible to ignore [Red Queen Risk](Scarcity-Risk.md#red-queen-risk) for a time, but this is just another form of [Technical Debt](Complexity-Risk.md) which eventually comes due.
+It's possible to ignore [Red Queen Risk](/tags/Red-Queen-Risk) for a time, but this is just another form of [Technical Debt](/tags/Complexity-Risk) which eventually comes due.
diff --git a/docs/risks/Dependency-Risks/Scarcity-Risks/Scarcity-Risk.md b/docs/risks/Dependency-Risks/Scarcity-Risks/Scarcity-Risk.md
index bcd562928..047ed12b0 100644
--- a/docs/risks/Dependency-Risks/Scarcity-Risks/Scarcity-Risk.md
+++ b/docs/risks/Dependency-Risks/Scarcity-Risks/Scarcity-Risk.md
@@ -2,6 +2,7 @@
title: Scarcity Risk
description: Scarcity Risk is about quantities of a dependency, and specifically, not having enough.
+slug: /risks/Scarcity-Risk
featured:
class: c
@@ -16,9 +17,9 @@ part_of: Dependency Risk
-While [Reliability Risk](Dependency-Risk.md#reliability-risk) (which we met in the previous section) considers what happens when a _single dependency_ is unreliable, scarcity is about _quantities_ of a dependency, and specifically, _not having enough_.
+While [Reliability Risk](/tags/Reliability-Risk) (which we met in the previous section) considers what happens when a _single dependency_ is unreliable, scarcity is about _quantities_ of a dependency, and specifically, _not having enough_.
-In the previous section we talked about the _reliability_ of the bus: it will either arrive or it wont. But what if, when it arrives, it's already full of passengers? There is a _scarcity of seats_: you don't much care which seat you get on the bus, you just need one. Let's term this [Scarcity Risk](Scarcity-Risk.md), _the risk of not being able to access a dependency in a timely fashion due to its scarcity_.
+In the previous section we talked about the _reliability_ of the bus: it will either arrive or it won't. But what if, when it arrives, it's already full of passengers? There is a _scarcity of seats_: you don't much care which seat you get on the bus, you just need one. Let's term this [Scarcity Risk](/tags/Scarcity-Risk), _the risk of not being able to access a dependency in a timely fashion due to its scarcity_.
Any resource (such as disk space, oxygen, concert tickets, time or pizza) that you depend on can suffer from _scarcity_, and here, we're going to look at five particular types, relevant to software.
diff --git a/docs/risks/Dependency-Risks/Scarcity-Risks/Schedule-Risk.md b/docs/risks/Dependency-Risks/Scarcity-Risks/Schedule-Risk.md
index 473fcae88..45e011740 100644
--- a/docs/risks/Dependency-Risks/Scarcity-Risks/Schedule-Risk.md
+++ b/docs/risks/Dependency-Risks/Scarcity-Risks/Schedule-Risk.md
@@ -16,15 +16,15 @@ part_of: Scarcity Risk
-[Schedule Risk](Scarcity-Risk.md#schedule-risk) is very pervasive, and really underlies _everything_ we do. People _want_ things, but they _want them at a certain time_. We need to eat and drink every day, for example. We might value having a great meal, but not if we have to wait three weeks for it.
+[Schedule Risk](/tags/Schedule-Risk) is very pervasive, and really underlies _everything_ we do. People _want_ things, but they _want them at a certain time_. We need to eat and drink every day, for example. We might value having a great meal, but not if we have to wait three weeks for it.
And let's go completely philosophical for a second: were you to attain immortality, you'd probably not feel the need to buy _anything_. You'd clearly have no _needs_. Anything you wanted, you could create yourself within your infinite time-budget. _Rocks don't need money_, after all.
-In the section on [Feature Risk](Feature-Risk.md) we looked at [Market Risk](Feature-Risk.md), the idea that the value of your product is itself at risk from the whims of the market, share prices being the obvious example of that effect. In Finance, we measure this using _price_, and we can put together probability models based on how much _money_ you might make or lose.
+In the section on [Feature Risk](/tags/Feature-Risk) we looked at [Market Risk](/tags/Market-Risk), the idea that the value of your product is itself at risk from the whims of the market, share prices being the obvious example of that effect. In Finance, we measure this using _price_, and we can put together probability models based on how much _money_ you might make or lose.
-With [Schedule Risk](Scarcity-Risk.md#schedule-risk), the underlying measure is _time_:
+With [Schedule Risk](/tags/Schedule-Risk), the underlying measure is _time_:
- - "If I implement feature X, I'm picking up something like 5 days of [Schedule Risk](Scarcity-Risk.md#schedule-risk)."
- - "If John goes travelling that's going to hit us with lots of [Schedule Risk](Scarcity-Risk.md#schedule-risk) while we train up Anne."
+ - "If I implement feature X, I'm picking up something like 5 days of [Schedule Risk](/tags/Schedule-Risk)."
+ - "If John goes travelling that's going to hit us with lots of [Schedule Risk](/tags/Schedule-Risk) while we train up Anne."
-... and so on. Clearly, in the same way as you don't know exactly how much money you might lose or gain on the stock-exchange, you can't put precise numbers on [Schedule Risk](Scarcity-Risk.md#schedule-risk) either.
+... and so on. Clearly, in the same way as you don't know exactly how much money you might lose or gain on the stock-exchange, you can't put precise numbers on [Schedule Risk](/tags/Schedule-Risk) either.
diff --git a/docs/risks/Dependency-Risks/Scarcity-Risks/Staff-Risk.md b/docs/risks/Dependency-Risks/Scarcity-Risks/Staff-Risk.md
index 568beeffd..d878abfbf 100644
--- a/docs/risks/Dependency-Risks/Scarcity-Risks/Staff-Risk.md
+++ b/docs/risks/Dependency-Risks/Scarcity-Risks/Staff-Risk.md
@@ -23,7 +23,7 @@ Since staff are a scarce resource, it stands to reason that if a startup has a "
You need to consider how long your staff are going to be around, especially if you have [Key Person Risk](https://en.wikipedia.org/wiki/Key_person_insurance#Key_person_definition) on some of them. People like to have new challenges, move on to live in new places, or simply get bored. Replacing staff can be highly risky.
-The longer your project goes on for, the more [Staff Risk](Scarcity-Risk.md#staff-risk) you will have to endure, and you can't rely on getting the [best staff for failing projects](Agency-Risk.md).
+The longer your project goes on, the more [Staff Risk](Scarcity-Risk.md#staff-risk) you will have to endure, and you can't rely on getting the [best staff for failing projects](/tags/Agency-Risk).
### Student Syndrome
@@ -31,8 +31,8 @@ The longer your project goes on for, the more [Staff Risk](Scarcity-Risk.md#staf
> "Student syndrome refers to planned procrastination, when, for example, a student will only start to apply themselves to an assignment at the last possible moment before its deadline." - _[Wikipedia](https://en.wikipedia.org/wiki/Student_syndrome)_
-Arguably, there is good psychological, evolutionary and risk-based reasoning behind procrastination: if there is apparently a lot of time to get a job done, then [Schedule Risk](Scarcity-Risk.md#schedule-risk) is low. If we're only ever mitigating our _biggest risks_, then managing [Schedule Risk](Scarcity-Risk.md#schedule-risk) in the future doesn't matter so much. Putting efforts into mitigating future risks that _might not arise_ is wasted effort.
+Arguably, there is good psychological, evolutionary and risk-based reasoning behind procrastination: if there is apparently a lot of time to get a job done, then [Schedule Risk](/tags/Schedule-Risk) is low. If we're only ever mitigating our _biggest risks_, then managing [Schedule Risk](/tags/Schedule-Risk) in the future doesn't matter so much. Putting efforts into mitigating future risks that _might not arise_ is wasted effort.
-Or at least, that's the argument: if you're [Discounting the Future To Zero](../thinking/Evaluating-Risk.md) then you'll be pulling all-nighters in order to deliver any assignment.
+Or at least, that's the argument: if you're [Discounting the Future To Zero](/thinking/Evaluating-Risk.md) then you'll be pulling all-nighters in order to deliver any assignment.
-So, the problem with [Student Syndrome](#student-syndrome) is that the _very mitigation_ for [Schedule Risk](Scarcity-Risk.md#schedule-risk) (allowing more time) is an [Attendant Risk](../thinking/Glossary.md#attendant-risk) that _causes_ [Schedule Risk](Scarcity-Risk.md#schedule-risk): you'll work within the more generous time allocation more slowly and you'll end up revealing [Hidden Risk](../thinking/Glossary.md#hidden-risk) _later_. And, discovering these hidden risks later causes you to end up being late because of them.
+So, the problem with [Student Syndrome](#student-syndrome) is that the _very mitigation_ for [Schedule Risk](/tags/Schedule-Risk) (allowing more time) is an [Attendant Risk](/thinking/Glossary.md#attendant-risk) that _causes_ [Schedule Risk](/tags/Schedule-Risk): you'll work within the more generous time allocation more slowly and you'll end up revealing [Hidden Risk](/thinking/Glossary.md#hidden-risk) _later_. And, discovering these hidden risks later causes you to end up being late because of them.
diff --git a/docs/risks/Dependency-Risks/Software-Dependency-Risk.md b/docs/risks/Dependency-Risks/Software-Dependency-Risk.md
index a47ea1793..222891c0a 100644
--- a/docs/risks/Dependency-Risks/Software-Dependency-Risk.md
+++ b/docs/risks/Dependency-Risks/Software-Dependency-Risk.md
@@ -16,7 +16,7 @@ part_of: Dependency Risk
-In this section, we're going to look specifically at _Software_ dependencies, although many of the concerns we'll raise here apply equally to all the other types of dependency we outlined in [Dependency Risk](Dependency-Risk.md).
+In this section, we're going to look specifically at _Software_ dependencies, although many of the concerns we'll raise here apply equally to all the other types of dependency we outlined in [Dependency Risk](/tags/Dependency-Risk).
![Software Dependency Risk](/img/generated/risks/software-dependency/software-dependency-risk.png)
@@ -30,27 +30,27 @@ In this section we will look at:
## Software Dependencies as Features
-[Software Dependencies](Software-Dependency-Risk.md) allows us to construct dependency networks to give us all kinds of features and mitigate all kinds of risk. That is, the features we are looking for in a dependency _are to mitigate some kind of risk_.
+[Software Dependencies](/tags/Software-Dependency-Risk) allow us to construct dependency networks to give us all kinds of features and mitigate all kinds of risk. That is, the features we are looking for in a dependency _are to mitigate some kind of risk_.
-For example, I might start using [WhatsApp](https://en.wikipedia.org/wiki/WhatsApp) because I want to be able to send my friends photos and text messages. However, it's likely that those same features allow us to mitigate [Coordination Risk](Coordination-Risk.md) when we're next trying to meet up.
+For example, I might start using [WhatsApp](https://en.wikipedia.org/wiki/WhatsApp) because I want to be able to send my friends photos and text messages. However, it's likely that those same features allow us to mitigate [Coordination Risk](/tags/Coordination-Risk) when we're next trying to meet up.
Let's look at some more examples:
|Risk |Software Mitigating That Risk |
|-----------------------------------------------------|------------------------------------------------------------------------- |
-|[Coordination Risk](Coordination-Risk.md) |Calendar tools, Bug Tracking, Distributed Databases |
-|[Schedule-Risk](Scarcity-Risk.md#schedule-risk) |Planning Software, Project Management Software |
-|[Communication-Risk](Communication-Risk.md) |Email, Chat tools, CRM tools like SalesForce, Forums, Twitter, Protocols |
-|[Process-Risk](Process-Risk.md) |Reporting tools, online forms, process tracking tools |
-|[Agency-Risk](Agency-Risk.md) |Auditing tools, transaction logs, Time-Sheet software, HR Software |
-|[Operational-Risk](Operational-Risk.md) |Support tools like ZenDesk, Grafana, InfluxDB, Geneos, Security Tools |
-|[Feature-Risk](Feature-Risk.md) |Every piece of software you use! |
+|[Coordination Risk](/tags/Coordination-Risk) |Calendar tools, Bug Tracking, Distributed Databases |
+|[Schedule-Risk](/tags/Schedule-Risk) |Planning Software, Project Management Software |
+|[Communication-Risk](/tags/Communication-Risk) |Email, Chat tools, CRM tools like SalesForce, Forums, Twitter, Protocols |
+|[Process-Risk](/tags/Process-Risk) |Reporting tools, online forms, process tracking tools |
+|[Agency-Risk](/tags/Agency-Risk) |Auditing tools, transaction logs, Time-Sheet software, HR Software |
+|[Operational-Risk](/tags/Operational-Risk) |Support tools like ZenDesk, Grafana, InfluxDB, Geneos, Security Tools |
+|[Feature-Risk](/tags/Feature-Risk) |Every piece of software you use! |
-With this in mind, we can see that adding a software dependency is a trade-off: we reduce some risk (as in the table above), but in return we pick up [Software Dependency Risk](Software-Dependency-Risk.md) as a result. Whether this trade-off is worth it depends entirely on how well that software dependency resolves the original risk and how onerous the new risks are that we pick up.
+With this in mind, we can see that adding a software dependency is a trade-off: we reduce some risk (as in the table above), but in return we pick up [Software Dependency Risk](/tags/Software-Dependency-Risk) as a result. Whether this trade-off is worth it depends entirely on how well that software dependency resolves the original risk and how onerous the new risks are that we pick up.
## Programming Languages as Dependencies
-In the earlier section on [Complexity Risk](Complexity-Risk.md) we tackled [Kolmogorov Complexity](Complexity-Risk.md#kolmogorov-complexity), and the idea that your codebase had some kind of minimal level of complexity based on the output it was trying to create. This is a neat idea, but in a way, we cheated. Let's look at how.
+In the earlier section on [Complexity Risk](/tags/Complexity-Risk) we tackled [Kolmogorov Complexity](Complexity-Risk.md#kolmogorov-complexity), and the idea that your codebase had some kind of minimal level of complexity based on the output it was trying to create. This is a neat idea, but in a way, we cheated. Let's look at how.
We were trying to figure out the shortest (Javascript) program to generate this output:
@@ -95,8 +95,8 @@ function out() { (7 symbols)
1. **Language Matters**: the Kolmogorov complexity is dependent on the language, and the features the language has built in.
2. **Exact Kolmogorov complexity is uncomputable anyway:** Since it's the _theoretical_ minimum program length, it's a fairly abstract idea, so we shouldn't get too hung up on this. There is no computable function that can answer "What's the Kolmogorov complexity of string X?"
-3. **What is this new library function we've created?** Is `abcdRepeater` going to be part of _every_ Javascript? If so, then we've shifted [Codebase Risk](Complexity-Risk.md) away from ourselves, but we've pushed [Conceptual Integrity Risk](Feature-Risk.md#conceptual-integrity-risk) onto every _other_ user of Javascript, because `abcdRepeater` will be clogging up the JavaScript documentation for everyone, despite being rarely useful.
-4. **Are there equivalent functions for every single other string?** If so, then compilation is no longer a tractable problem because now we have a massive library of different `XXXRepeater` functions to compile against. So, what we _lose_ in [Codebase Risk](Complexity-Risk.md#codebase-risk) we gain in [Complexity Risk](Complexity-Risk.md#space-and-time-complexity).
+3. **What is this new library function we've created?** Is `abcdRepeater` going to be part of _every_ Javascript? If so, then we've shifted [Codebase Risk](/tags/Complexity-Risk) away from ourselves, but we've pushed [Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk) onto every _other_ user of Javascript, because `abcdRepeater` will be clogging up the JavaScript documentation for everyone, despite being rarely useful (see the sketch after this list).
+4. **Are there equivalent functions for every single other string?** If so, then compilation is no longer a tractable problem because now we have a massive library of different `XXXRepeater` functions to compile against. So, what we _lose_ in [Codebase Risk](/tags/Codebase-Risk) we gain in [Complexity Risk](Complexity-Risk.md#space-and-time-complexity).
5. **Language design, then, is about _ergonomics_:** After you have passed the relatively low bar of providing [Turing Completeness](https://en.wikipedia.org/wiki/Turing_completeness), the key is to provide _useful_ features that enable problems to be solved, without over-burdening the user with features they _don't_ need. And in fact, all software is about this.
6. **Language Ecosystems _really_ matter**: all modern languages allow extensions via libraries, modules or plugins. If your particular `abcdRepeater` isn't in the main library,
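+To make point 3 concrete, here is a hypothetical sketch of what such a built-in might look like: short for us to call, but a permanent cost for every other user of the language.
+```javascript
+// Hypothetical: imagine this shipped in JavaScript's standard library.
+// Our program becomes trivially short, but the conceptual weight of a
+// rarely-useful function is now pushed onto everyone else.
+function abcdRepeater(n) {
+  return "abcd".repeat(n);
+}
+
+console.log(abcdRepeater(10)); // "abcdabcd..." (40 characters)
+```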
@@ -112,18 +112,18 @@ But outside, the form is simple, and designed for humans to use. This is _[erg
![Software Dependency Ergonomics: adopting simple dependencies](/img/generated/risks/software-dependency/ergonomics1.png)
-The _interface_ of a tool is the part we touch and interact with, via its protocol. If you adopt _simple_ dependencies (as in the diagram above) you don't accrue [Communication Risk](Communication-Risk.md), but you might have to orchestrate _more_ dependencies, picking up [Complexity Risk](Complexity-Risk.md) in your software.
+The _interface_ of a tool is the part we touch and interact with, via its protocol. If you adopt _simple_ dependencies (as in the diagram above) you don't accrue [Communication Risk](/tags/Communication-Risk), but you might have to orchestrate _more_ dependencies, picking up [Complexity Risk](/tags/Complexity-Risk) in your software.
The interface of a dependency expands when you ask it to do a wider variety of things. An easy-to-use drill does one thing well: it turns drill-bits at useful levels of torque for drilling holes and sinking screws. But if you wanted it to also operate as a lathe, a sander or a strimmer (all basically mechanical things going round) you would have to sacrifice the conceptual integrity for a more complex protocol, probably including adapters, extensions, handles and so on.
![Software Dependency Ergonomics: adopting complex dependencies](/img/generated/risks/software-dependency/ergonomics2.png)
-Adopting complex software dependencies (as shown in the diagram above) might allow you to avoid complexity in your own codebase. However, this likely gives you a longer learning curve before you understand the tool, and you _might_ run into issues later where the tool fails to do something critical that you wanted (a [Dead End Risk](Complexity-Risk.md#dead-end-risk)).
+Adopting complex software dependencies (as shown in the diagram above) might allow you to avoid complexity in your own codebase. However, this likely gives you a longer learning curve before you understand the tool, and you _might_ run into issues later where the tool fails to do something critical that you wanted (a [Dead End Risk](/tags/Dead-End-Risk)).
Using a software dependency allows us to split a project's complexity into two (see the sketch after this list):
- The inner complexity of the dependency (how it works internally, its own [internal complexity](Complexity-Risk.md#kolmogorov-complexity)).
- - The complexity of the instructions that we need to write to make the tool work, [the protocol complexity](Communication-Risk.md#protocol-risk), which will be a function of the complexity of the tool itself.
+ - The complexity of the instructions that we need to write to make the tool work, [the protocol complexity](/tags/Protocol-Risk), which will be a function of the complexity of the tool itself.
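+For example, using JavaScript's built-in `sort` as the dependency:
+```javascript
+// Inner complexity: an efficient sorting algorithm, hidden inside the runtime.
+// Protocol complexity: all we must know is that sort() takes a comparator
+// returning a negative, zero or positive number.
+const invoices = [{ total: 250 }, { total: 90 }, { total: 400 }];
+invoices.sort((a, b) => a.total - b.total);
+console.log(invoices.map(i => i.total)); // [ 90, 250, 400 ]
+```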
![Types of Complexity For a Software Dependency](/img/generated/risks/software-dependency/protocol-complexity.png)
@@ -131,7 +131,7 @@ As the above diagram shows, the bulk of the complexity of a software tool is hid
### Designing Protocols
-Software is not constrained by _physical_ ergonomics in the same way as a tool is. But ideally, it should have conceptual ergonomics: complexity is hidden away from the user behind the _User Interface_. This is the familiar concept of [Abstraction](../thinking/Glossary.md#abstraction) we've already looked at. As we saw in [Communication Risk](Communication-Risk.md#learning-curve-risk), when we use a new protocol, we face [Learning Curve Risk](Communication-Risk.md#learning-curve-risk).
+Software is not constrained by _physical_ ergonomics in the same way as a tool is. But ideally, it should have conceptual ergonomics: complexity is hidden away from the user behind the _User Interface_. This is the familiar concept of [Abstraction](/thinking/Glossary.md#abstraction) we've already looked at. As we saw in [Communication Risk](/tags/Communication-Risk), when we use a new protocol, we face [Learning Curve Risk](/tags/Learning-Curve-Risk).
To minimise this, we should apply the [Principle Of Least Astonishment](https://en.wikipedia.org/wiki/Principle_of_least_astonishment) when designing our own protocols:
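+
+As a sketch of the difference (the `UserStore` interface and its methods are hypothetical):
+
+```java
+import java.util.Optional;
+
+// A hypothetical protocol, designed with least astonishment in mind.
+public interface UserStore {
+
+    // Astonishing version: returns null on a miss and quietly *creates*
+    // the user if it doesn't exist. Callers will guess wrong:
+    // User fetch(String id);
+
+    // Less astonishing: the name says what happens, the Optional return
+    // type makes the "not found" case explicit, and there are no hidden
+    // side effects.
+    Optional<User> findById(String id);
+
+    record User(String id, String name) {}
+}
+```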
@@ -153,7 +153,7 @@ All 3 approaches involve a different risk-profile. Let's look at each in turn,
Way before the Internet, this was the only game in town. Tool support was very thin on the ground. Algorithms could be distributed as code snippets _in books and magazines_ which could be transcribed and run, and added to your program. This spirit lives on somewhat in StackOverflow and JSFiddle, where you are expected to "adopt" others' code into your own project. Code-your-own is still the best option if you have highly bespoke requirements, or are dealing with unusual environmental contexts.
-One of the hidden risks of embarking on a code-your-own approach is that the features you need are _not_ apparent from the outset. What might appear to be a trivial implementation of some piece of functionality can often turn into its own industry as more and more hidden [Feature Risk](Feature-Risk.md) is uncovered. For example, as we discussed in our earlier treatment of [Dead-End Risk](Complexity-Risk.md#dead-end-risk), building log-in screens _seemed like a good idea_. However, this gets out-of-hand fast when you need:
+One of the hidden risks of embarking on a code-your-own approach is that the features you need are _not_ apparent from the outset. What might appear to be a trivial implementation of some piece of functionality can often turn into its own industry as more and more hidden [Feature Risk](/tags/Feature-Risk) is uncovered. For example, as we discussed in our earlier treatment of [Dead-End Risk](/tags/Dead-End-Risk), building log-in screens _seemed like a good idea_. However, this gets out of hand fast when you need:
- A password reset screen
- To email the reset links to the user
@@ -166,17 +166,17 @@ One of the hidden risks of embarking on a code-your-own approach is that the fea
### Unwritten Software
-Sometimes you will pick up [Dependency Risk](Dependency-Risk.md) from _unwritten software_. This commonly happens when work is divided amongst team members, or teams.
+Sometimes you will pick up [Dependency Risk](/tags/Dependency-Risk) from _unwritten software_. This commonly happens when work is divided amongst team members, or teams.
![Sometimes, a module you're writing will depend on unwritten code](/img/generated/risks/software-dependency/unwritten.png)
If a component **A** of our project _depends_ on **B** for some kind of processing, you might not be able to complete **A** before writing **B**. This makes _scheduling_ the project harder, and if component **A** is a risky part of the project, then the chances are you'll want to mitigate risk there first.
-But it also hugely increases [Communication Risk](Communication-Risk.md) because now you're being asked to communicate with a dependency that doesn't really exist yet, _let alone_ have any documentation.
+But it also hugely increases [Communication Risk](/tags/Communication-Risk) because now you're being asked to communicate with a dependency that doesn't really exist yet, _let alone_ have any documentation.
There are a couple of ways of handling this:
- - **Standards**: if component **B** is a database, a queue, mail gateway or something else with a standard interface, then you're in luck. Write **A** to those standards, and find a cheap, simple implementation to test with. This gives you time to sort out exactly what implementation of **B** you're going for. This is not a great long-term solution, because obviously, you're not using the _real_ dependency- you might get surprised when the behaviour of the real component is subtly different. But it can reduce [Schedule Risk](Scarcity-Risk.md#schedule-risk) in the short-term.
+ - **Standards**: if component **B** is a database, a queue, mail gateway or something else with a standard interface, then you're in luck. Write **A** to those standards, and find a cheap, simple implementation to test with. This gives you time to sort out exactly what implementation of **B** you're going for. This is not a great long-term solution, because obviously, you're not using the _real_ dependency - you might get surprised when the behaviour of the real component is subtly different. But it can reduce [Schedule Risk](/tags/Schedule-Risk) in the short-term.
 - **Coding To Interfaces**: if standards aren't an option, but the surface area of **B** that **A** uses is quite small and obvious, you can write a small interface for it, and work behind that, using a [Mock](https://en.wikipedia.org/wiki/Mock_object) for **B** while you're waiting for the finished component (see the sketch below). Write the interface to cover only what **A** _needs_, rather than everything that **B** _does_, in order to minimise the risk of [Leaky Abstractions](https://en.wikipedia.org/wiki/Leaky_abstraction).
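+
+Here is a minimal sketch of the second approach (all the names - `PricingService`, `Basket` - are hypothetical):
+
+```java
+import java.util.List;
+import java.util.Optional;
+
+// Component B doesn't exist yet, so we capture *only what A needs* from it
+// in a small interface.
+interface PricingService {
+    Optional<Double> priceOf(String productId);
+}
+
+// Component A is written against the interface, not the real thing.
+class Basket {
+    private final PricingService pricing;
+
+    Basket(PricingService pricing) {
+        this.pricing = pricing;
+    }
+
+    double total(List<String> productIds) {
+        return productIds.stream()
+                .mapToDouble(id -> pricing.priceOf(id).orElse(0.0))
+                .sum();
+    }
+}
+
+// A hand-rolled mock stands in for B until the real component arrives.
+class FixedPricePricingService implements PricingService {
+    public Optional<Double> priceOf(String productId) {
+        return Optional.of(9.99);  // canned answer, good enough for testing A
+    }
+}
+```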
@@ -184,7 +184,7 @@ There are a couple of ways of doing this:
### Conway's Law
-Due to channel bandwidth limitations, if the dependency is being written by another person, another team or in another country, [Communication Risk](Communication-Risk.md) piles up. When this happens, you will want to minimise the interface complexity _as much as possible_, since the more complex the interface, the worse the [Communication Risk](Communication-Risk.md) will be. The tendency then is to make the interfaces between teams or people _as simple as possible_, modularising along these organisational boundaries.
+Due to channel bandwidth limitations, if the dependency is being written by another person, another team or in another country, [Communication Risk](/tags/Communication-Risk) piles up. When this happens, you will want to minimise the interface complexity _as much as possible_, since the more complex the interface, the worse the [Communication Risk](/tags/Communication-Risk) will be. The tendency then is to make the interfaces between teams or people _as simple as possible_, modularising along these organisational boundaries.
In essence, this is Conway's Law:
@@ -192,12 +192,12 @@ In essence, this is Conway's Law:
### 2. Software Libraries
-By choosing a particular software library, we are making a move on the [Risk Landscape](Risk-Landscape.md) in the hope of moving to a place with more favourable risks. Typically, using library code offers a [Schedule Risk](Scarcity-Risk.md#schedule-risk) and [Complexity Risk](Complexity-Risk.md) [Silver Bullet](../complexity/Silver-Bullets.md) - a high-speed route over the risk landscape to somewhere nearer where we want to be. But, in return we expect to pick up:
+By choosing a particular software library, we are making a move on the [Risk Landscape](Risk-Landscape.md) in the hope of moving to a place with more favourable risks. Typically, using library code offers a [Schedule Risk](/tags/Schedule-Risk) and [Complexity Risk](/tags/Complexity-Risk) [Silver Bullet](/complexity/Silver-Bullets.md) - a high-speed route over the risk landscape to somewhere nearer where we want to be. But, in return we expect to pick up:
-- **[Communication Risk](Communication-Risk.md)**: because we now have to learn how to communicate with this new dependency.
-- **[Boundary Risk](Boundary-Risk.md)**: - because now are limited to using the functionality provided by this dependency. We have chosen it over alternatives and changing to something else would be more work and therefore costly.
+- **[Communication Risk](/tags/Communication-Risk)**: because we now have to learn how to communicate with this new dependency.
+- **[Boundary Risk](/tags/Boundary-Risk)**: because we are now limited to using the functionality provided by this dependency. We have chosen it over alternatives and changing to something else would be more work and therefore costly.
-But, it's quite possible that we could wind up in a worse place than we started out, by using a library that's out-of-date, riddled with bugs or badly supported. i.e. full of new, hidden [Feature Risk](Feature-Risk.md).
+But it's quite possible that we could wind up in a worse place than we started, by using a library that's out-of-date, riddled with bugs or badly supported - i.e. full of new, hidden [Feature Risk](/tags/Feature-Risk).
It's _really easy_ to make bad decisions about which tools to use because the tools don't (generally) advertise their deficiencies. After all, they don't generally know how _you_ will want to use them.
@@ -215,9 +215,9 @@ But, leaving that aside, let's try to build a model of what this decision making
In the table above, I am summarising three different sources (linked at the end of the section), which give descriptions of which factors to look for when choosing open-source libraries. Here are some take-aways:
- - **[Feature Risk](Feature-Risk.md) is a big concern**: How can you be sure that the project will do what you want it to do ahead of schedule? Will it contain bugs or missing features? By looking at factors like _release frequency_ and _size of the community_ you get a good feel for this which is difficult to fake.
- - **[Boundary Risk](Boundary-Risk.md) is also very important**: You are going to have to _live_ with your choices for the duration of the project, so it's worth spending the effort to either ensure that you're not going to regret the decision, or that you can change direction later.
- - **Third is [Communication Risk](Communication-Risk.md)**: how well does the project deal with its users? If a project is "famous", then it has communicated its usefulness to a wide, appreciative audience. Avoiding [Communication Risk](Communication-Risk.md) is also a good reason to pick _tools you are already familiar with_.
+ - **[Feature Risk](/tags/Feature-Risk) is a big concern**: How can you be sure that the project will do what you want it to do ahead of schedule? Will it contain bugs or missing features? By looking at factors like _release frequency_ and _size of the community_ you get a good feel for this which is difficult to fake.
+ - **[Boundary Risk](/tags/Boundary-Risk) is also very important**: You are going to have to _live_ with your choices for the duration of the project, so it's worth spending the effort to either ensure that you're not going to regret the decision, or that you can change direction later.
+ - **Third is [Communication Risk](/tags/Communication-Risk)**: how well does the project deal with its users? If a project is "famous", then it has communicated its usefulness to a wide, appreciative audience. Avoiding [Communication Risk](/tags/Communication-Risk) is also a good reason to pick _tools you are already familiar with_.
![Software Libraries Risk Tradeoff](/img/generated/risks/software-dependency/library.png)
@@ -229,21 +229,21 @@ In the table above, I am summarising three different sources (linked at the end
### Complexity Risk?
-One thing that none of the sources in the table consider (at least from the outset) is the [Complexity Risk](Complexity-Risk.md) of using a solution:
+One thing that none of the sources in the table consider (at least from the outset) is the [Complexity Risk](/tags/Complexity-Risk) of using a solution:
- Does it drag in lots of extra dependencies that seem unnecessary for the job in hand? If so, you could end up in [Dependency Hell](https://en.wikipedia.org/wiki/Dependency_hell), with multiple, conflicting versions of libraries in the project.
- Do you already have a dependency providing this functionality? So many times, I've worked on projects that import a _new_ dependency when some existing (perhaps transitive) dependency has _already brought in the functionality_. For example, there are plenty of libraries for [JSON](https://en.wikipedia.org/wiki/JSON) marshalling, but if I'm also using a web framework the chances are it already depends on one (see the sketch after this list).
- Does it contain lots of functionality that isn’t relevant to the task you want it to accomplish? e.g. Using Java when a shell script would do (on a non-Java project)
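+
+A build-tool dependency report is the usual way to answer the second question, but you can also check the classpath directly. A quick, illustrative sketch (the three class names are real entry points of common JSON libraries):
+
+```java
+// Before adding a new JSON library, see whether one has already arrived
+// transitively via another dependency.
+public class ClasspathCheck {
+    public static void main(String[] args) {
+        String[] candidates = {
+                "com.fasterxml.jackson.databind.ObjectMapper",  // Jackson
+                "com.google.gson.Gson",                         // Gson
+                "jakarta.json.Json"                             // JSON-P
+        };
+        for (String candidate : candidates) {
+            try {
+                Class.forName(candidate);
+                System.out.println(candidate + ": already on the classpath");
+            } catch (ClassNotFoundException e) {
+                System.out.println(candidate + ": absent");
+            }
+        }
+    }
+}
+```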
-Sometimes, the amount of complexity _goes up_ when you use a dependency for _good reason._ For example, in Java you can use [Java Database Connectivity (JDBC)](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to interface with various types of database. [Spring Framework](https://en.wikipedia.org/wiki/Spring_Framework) (a popular Java library) provides a thing called a `JDBCTemplate`. This actually makes your code _more_ complex, and can prove very difficult to debug. However, it prevents some security issues, handles resource disposal and makes database access more efficient. None of those are essential to interfacing with the database, but not having them is [Operational Risk](Operational-Risk.md) that can bite you later on.
+Sometimes, the amount of complexity _goes up_ when you use a dependency for _good reason._ For example, in Java you can use [Java Database Connectivity (JDBC)](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to interface with various types of database. [Spring Framework](https://en.wikipedia.org/wiki/Spring_Framework) (a popular Java library) provides a thing called a `JdbcTemplate`. This actually makes your code _more_ complex, and can prove very difficult to debug. However, it prevents some security issues, handles resource disposal and makes database access more efficient. None of those are essential to interfacing with the database, but not having them is [Operational Risk](/tags/Operational-Risk) that can bite you later on.
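+
+To make that trade concrete, here is a hedged sketch (the `users` table and query are invented for illustration):
+
+```java
+import java.util.List;
+import javax.sql.DataSource;
+import org.springframework.jdbc.core.JdbcTemplate;
+
+public class UserQueries {
+
+    // JdbcTemplate acquires and releases the connection, statement and
+    // result set for us. Doing that by hand with raw JDBC is easy to get
+    // wrong - and a leaked connection is exactly the kind of Operational
+    // Risk that bites later on.
+    static List<String> activeUserNames(DataSource dataSource) {
+        JdbcTemplate template = new JdbcTemplate(dataSource);
+        return template.query(
+                "SELECT name FROM users WHERE active = ?",
+                (resultSet, rowNum) -> resultSet.getString("name"),
+                true);  // bound parameter, avoiding SQL injection
+    }
+}
+```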
### 3. Software-as-a-Service
Businesses opt for Software-as-a-Service (SaaS) because:
-- It promises to vastly reduce the [Complexity Risk](Complexity-Risk.md) they face in their organisations. e.g. managing the software or making changes to it.
-- Payment is usually based on _usage_, mitigating [Funding Risk](Scarcity-Risk.md#funding-risk). e.g. Instead of having to pay up-front for a license and hire in-house software administrators, they can leave this function to the experts.
-- Potentially, you out-source the [Operational Risk](Operational-Risk.md) to a third party. e.g. ensuring availability, making sure data is secure and so on.
+- It promises to vastly reduce the [Complexity Risk](/tags/Complexity-Risk) they face in their organisations. e.g. managing the software or making changes to it.
+- Payment is usually based on _usage_, mitigating [Funding Risk](/tags/Funding-Risk). e.g. Instead of having to pay up-front for a license and hire in-house software administrators, they can leave this function to the experts.
+- Potentially, you out-source the [Operational Risk](/tags/Operational-Risk) to a third party. e.g. ensuring availability, making sure data is secure and so on.
SaaS is now a very convenient way to provide _commercial_ software. Popular examples of SaaS are [SalesForce](https://en.wikipedia.org/wiki/Salesforce.com) and [GMail](https://en.wikipedia.org/wiki/Gmail), both of which follow the commonly-used [Freemium](https://en.wikipedia.org/wiki/Freemium) model, where the basic service is provided free but upgrading to a paid account gives extra benefits.
@@ -251,9 +251,9 @@ SaaS is now a very convenient way to provide _commercial_ software. Popular ex
The diagram above summarises the risks raised in some of the available literature (sources below). Some take-aways:
-- Clearly, [Operational Risk](Operational-Risk.md) is now a big concern. By depending on a third-party organisation you are tying yourself to its success or failure in a much bigger way than just by using a piece of open-source software. What happens to data security, both in the data centre and over the Internet? Although you might choose a SaaS solution to mitigate _internal_ [Operational Risk](Operational-Risk.md), you might just be "throwing it over the wall" to a third party, who might do a worse job.
-- With [Feature Risk](Feature-Risk.md) you now have to contend with the fact that the software will be upgraded _outside your control_, and you may have limited control over which features get added or changed.
-- [Boundary Risk](Boundary-Risk.md) is also a different proposition: you are tied to the software provider by _a contract_. If the service changes in the future, or isn't to your liking, you can't simply fork the code (like you could with an open source project).
+- Clearly, [Operational Risk](/tags/Operational-Risk) is now a big concern. By depending on a third-party organisation you are tying yourself to its success or failure in a much bigger way than just by using a piece of open-source software. What happens to data security, both in the data centre and over the Internet? Although you might choose a SaaS solution to mitigate _internal_ [Operational Risk](/tags/Operational-Risk), you might just be "throwing it over the wall" to a third party, who might do a worse job.
+- With [Feature Risk](/tags/Feature-Risk) you now have to contend with the fact that the software will be upgraded _outside your control_, and you may have limited control over which features get added or changed.
+- [Boundary Risk](/tags/Boundary-Risk) is also a different proposition: you are tied to the software provider by _a contract_. If the service changes in the future, or isn't to your liking, you can't simply fork the code (like you could with an open source project).
![Risk Tradeoff From Using Software as a Service (SaaS)](/img/generated/risks/software-dependency/saas.png)
@@ -276,8 +276,8 @@ Let's expand this view slightly and look at where different pieces of software s
![Software Dependencies, Pricing, Delivery Matrix Risk Profiles](/img/generated/risks/software-dependency/software_dependency_table_3_sideways.png)
-- Where there is value in **the [Network Effect](https://en.wikipedia.org/wiki/Network_effect)** it's often a sign that the software will be free, or open source: programming languages and Linux are the obvious examples of this. Bugs are easier to find when there are lots of eyes looking, and learning the skill to use the software has less [Boundary Risk](Boundary-Risk.md) if you know you'll be able to use it at any point in the future.
-- At the other end of the spectrum, clients will happily pay for software if it clearly **reduces [Operational Risk](Operational-Risk.md)**. Take [Amazon Web Services (AWS)](https://en.wikipedia.org/wiki/Amazon_Web_Services). The essential trade here is that you substitute the complexity of hosting and maintaining various pieces of hardware, in exchange for metered payments ([Funding Risk](Scarcity-Risk.md#funding-risk) for you). Since the AWS _interfaces_ are specific to Amazon, there is significant [Boundary Risk](Boundary-Risk.md) in choosing this option.
+- Where there is value in **the [Network Effect](https://en.wikipedia.org/wiki/Network_effect)** it's often a sign that the software will be free, or open source: programming languages and Linux are the obvious examples of this. Bugs are easier to find when there are lots of eyes looking, and learning the skill to use the software has less [Boundary Risk](/tags/Boundary-Risk) if you know you'll be able to use it at any point in the future.
+- At the other end of the spectrum, clients will happily pay for software if it clearly **reduces [Operational Risk](/tags/Operational-Risk)**. Take [Amazon Web Services (AWS)](https://en.wikipedia.org/wiki/Amazon_Web_Services). The essential trade here is that you substitute the complexity of hosting and maintaining various pieces of hardware, in exchange for metered payments ([Funding Risk](/tags/Funding-Risk) for you). Since the AWS _interfaces_ are specific to Amazon, there is significant [Boundary Risk](/tags/Boundary-Risk) in choosing this option.
- In the middle there are lots of **substitute options** and therefore high competition. Because of this, prices are pushed towards zero and advertising is often used to monetise the product. [Angry Birds](https://en.wikipedia.org/wiki/Angry_Birds) is a classic example: initially it had demo and paid versions; however, [Rovio](https://en.wikipedia.org/wiki/Rovio_Entertainment) discovered there was much more money to be made through advertising than from the [paid-for app](https://www.deconstructoroffun.com/blog/2017/6/11/how-angry-birds-2-multiplied-quadrupled-revenue-in-a-year).
## Choice
@@ -288,4 +288,4 @@ Choosing dependencies can be extremely difficult. As we discussed above, the us
Having chosen a dependency, whether or not you end up in a more favourable position risk-wise is going to depend heavily on the quality of the execution and the skill of the implementor. With software dependencies we often have to live with the decisions we make for a long time: _choosing_ the software dependency is far easier than _changing it later_.
-Let's take a closer look at this problem in the section on [Boundary Risk](Boundary-Risk.md). But first, lets looks at [processes](Process-Risk.md).
+Let's take a closer look at this problem in the section on [Boundary Risk](/tags/Boundary-Risk). But first, let's look at [processes](/tags/Process-Risk).
diff --git a/docs/risks/Feature-Risks/Analysis.md b/docs/risks/Feature-Risks/Analysis.md
index 86f6f2647..88385ac10 100644
--- a/docs/risks/Feature-Risks/Analysis.md
+++ b/docs/risks/Feature-Risks/Analysis.md
@@ -10,24 +10,24 @@ sidebar_label: Analysis
## Fashion
-Fashion plays a big part in IT. By being _fashionable_, web-sites are communicating: _this is a new thing_, _this is relevant_, _this is not terrible_. All of which is mitigating a [Communication Risk](Communication-Risk.md). Users are all-too-aware that the Internet is awash with terrible, abandon-ware sites that are going to waste their time.
+Fashion plays a big part in IT. By being _fashionable_, web-sites are communicating: _this is a new thing_, _this is relevant_, _this is not terrible_. All of which is mitigating a [Communication Risk](/tags/Communication-Risk). Users are all-too-aware that the Internet is awash with terrible, abandon-ware sites that are going to waste their time.
How can you communicate to your users that you're not one of them?
## Delight
-If this breakdown of [Feature Risk](Feature-Risk.md) seems reductive, then try not to think of it that way: the aim _of course_ should be to delight users, and turn them into fans.
+If this breakdown of [Feature Risk](/tags/Feature-Risk) seems reductive, then try not to think of it that way: the aim _of course_ should be to delight users, and turn them into fans.
-Consider [Feature Risk](Feature-Risk.md) from both the down-side and the up-side:
+Consider [Feature Risk](/tags/Feature-Risk) from both the down-side and the up-side:
- What are we missing?
- How can we be _even better_?
## Analysis
-So far in this section, we've simply seen a bunch of different types of [Feature Risk](Feature-Risk.md). But we're going to be relying heavily on [Feature Risk](Feature-Risk.md) as we go on in order to build our understanding of other risks, so it's probably worth spending a bit of time up front to classify what we've found.
+So far in this section, we've simply seen a bunch of different types of [Feature Risk](/tags/Feature-Risk). But we're going to be relying heavily on [Feature Risk](/tags/Feature-Risk) as we go on in order to build our understanding of other risks, so it's probably worth spending a bit of time up front to classify what we've found.
-The [Feature Risks](Feature-Risk.md) identified here basically exist in a space with at least 3 dimensions:
+The [Feature Risks](/tags/Feature-Risk) identified here basically exist in a space with at least 3 dimensions:
- **Fit**: how well the features fit for a particular client.
- **Audience**: the range of clients (the _market_) that may be able to use this feature.
@@ -55,7 +55,7 @@ In the [Staging And Classifying](Staging-And-Classifying.md) section we'll come
### Fit and Audience
-Two risks, [Feature Access Risk](Feature-Risk.md#feature-access-risk) and [Market Risk](Feature-Risk.md#market-risk), consider _fit_ for a whole _audience_ of users. They are different: just as it's possible to have a small audience, but a large revenue, it's possible to have a product which has low [Feature Access Risk](#feature-access-risk) (i.e lots of users can access it without difficulty) but high [Market Risk](#market-risk) (i.e. the market is highly volatile or capricious in it's demands). _Online services_ often suffer from this [Market Risk](#market-risk) roller-coaster, being one moment highly valued and the next irrelevant.
+Two risks, [Feature Access Risk](/tags/Feature-Access-Risk) and [Market Risk](/tags/Market-Risk), consider _fit_ for a whole _audience_ of users. They are different: just as it's possible to have a small audience, but a large revenue, it's possible to have a product which has low [Feature Access Risk](#feature-access-risk) (i.e. lots of users can access it without difficulty) but high [Market Risk](#market-risk) (i.e. the market is highly volatile or capricious in its demands). _Online services_ often suffer from this [Market Risk](#market-risk) roller-coaster, being one moment highly valued and the next irrelevant.
- **Market Risk** is therefore risk to _income_ from the market changing.
- **Feature Access Risk** is risk to _audience_ changing.
diff --git a/docs/risks/Feature-Risks/Application.md b/docs/risks/Feature-Risks/Application.md
index cefdef30f..92b6198d4 100644
--- a/docs/risks/Feature-Risks/Application.md
+++ b/docs/risks/Feature-Risks/Application.md
@@ -10,11 +10,11 @@ sidebar_label: Application
Next time you are grooming the backlog, why not apply this:
- - Can you judge which tasks mitigate the most [Feature Risk](Feature-Risk.md)?
+ - Can you judge which tasks mitigate the most [Feature Risk](/tags/Feature-Risk)?
- Are you delivering features that are valuable across a large audience? Or of less value across a wider audience?
- - How does writing a specification mitigate [Fit Risk](Feature-Risk.md#feature-fit-risk)? For what other reasons are you writing specifications?
+ - How does writing a specification mitigate [Fit Risk](/tags/Feature-Fit-Risk)? For what other reasons are you writing specifications?
- Does the audience _know_ that the features exist? How do you communicate feature availability to them?
In the next section, we are going to unpack this last point further. Somewhere between "what the customer wants" and "what you give them" is a _dialogue_. In using a software product, users are engaging in a _dialogue_ with its features. If the features don't exist, hopefully they will engage in a dialogue with the development team to get them added.
-These dialogues are prone to risk and this is the subject of the next section, [Communication Risk](Communication-Risk.md).
\ No newline at end of file
+These dialogues are prone to risk and this is the subject of the next section, [Communication Risk](/tags/Communication-Risk).
\ No newline at end of file
diff --git a/docs/risks/Feature-Risks/Conceptual-Integrity-Risk.md b/docs/risks/Feature-Risks/Conceptual-Integrity-Risk.md
index ade8762cf..8a7f678a1 100644
--- a/docs/risks/Feature-Risks/Conceptual-Integrity-Risk.md
+++ b/docs/risks/Feature-Risks/Conceptual-Integrity-Risk.md
@@ -15,12 +15,12 @@ part_of: Feature Risk
![Conceptual Integrity Risk](/img/generated/risks/feature/conceptual-integrity-risk.png)
-[Conceptual Integrity Risk](Feature-Risk.md#conceptual-integrity-risk) is the risk that chasing after features leaves the product making no sense, and therefore pleasing no-one.
+[Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk) is the risk that chasing after features leaves the product making no sense, and therefore pleasing no-one.
Sometimes users _swear blind_ that they need some feature or other, but it is at odds with the design of the system and plain _doesn't make sense_. Sometimes the development team can spot this kind of conceptual failure as soon as the suggestion is made, but usually it's in coding that this becomes apparent.
Sometimes it can go on a lot longer. Here's an example: I once worked on some software that was built as a score-board within a chat application. However, after we'd added much-asked-for commenting and reply features to our score-board, we realised we'd implemented a chat application _within a chat application_, and had wasted our time enormously.
-[Feature Phones](https://en.wikipedia.org/wiki/Feature_phone) are another example: although it _seemed_ like the market wanted more and more features added to their phones, [Apple's iPhone](https://en.wikipedia.org/wiki/IPhone) was able to steal huge market share by presenting a much more enjoyable, more coherent user experience, despite being more expensive and having _fewer_ features. Feature Phones had been drowning in increasing [Conceptual Integrity Risk](Feature-Risk.md#conceptual-integrity-risk) without realising it.
+[Feature Phones](https://en.wikipedia.org/wiki/Feature_phone) are another example: although it _seemed_ like the market wanted more and more features added to their phones, [Apple's iPhone](https://en.wikipedia.org/wiki/IPhone) was able to steal huge market share by presenting a much more enjoyable, more coherent user experience, despite being more expensive and having _fewer_ features. Feature Phones had been drowning in increasing [Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk) without realising it.
-Conceptual Integrity Risk is a particularly pernicious kind of [Feature Risk](Feature-Risk.md) which can only be mitigated by good design and [feedback](../thinking/Cadence.md). Human needs are [fractal in nature](../estimating/Fractals.md): the more you examine them, the more complexity you can find. The aim of a product is to capture some needs at a *general* level: you can't hope to anticipate everything. As with the other risks, there is an inherent [Schedule Risk](Scarcity-Risk.md#schedule-risk) as addressing these risks takes _time_.
\ No newline at end of file
+Conceptual Integrity Risk is a particularly pernicious kind of [Feature Risk](/tags/Feature-Risk) which can only be mitigated by good design and [feedback](/thinking/Cadence.md). Human needs are [fractal in nature](../estimating/Fractals.md): the more you examine them, the more complexity you can find. The aim of a product is to capture some needs at a *general* level: you can't hope to anticipate everything. As with the other risks, there is an inherent [Schedule Risk](/tags/Schedule-Risk) as addressing these risks takes _time_.
\ No newline at end of file
diff --git a/docs/risks/Feature-Risks/Feature-Access-Risk.md b/docs/risks/Feature-Risks/Feature-Access-Risk.md
index 2633b5b75..c49257ec0 100644
--- a/docs/risks/Feature-Risks/Feature-Access-Risk.md
+++ b/docs/risks/Feature-Risks/Feature-Access-Risk.md
@@ -21,6 +21,6 @@ Feature Access Risks are risks due to some clients not having access to some or
Sometimes features can work for some people and not others: this could be down to [Accessibility](https://en.wikipedia.org/wiki/Accessibility) issues, language barriers, localisation or security.
-You could argue that the choice of _platform_ is also going to limit access: writing code for XBox-only leaves PlayStation owners out in the cold. This is _largely_ [Feature Access Risk](Feature-Risk.md#feature-access-risk), though [Dependency Risk](Dependency-Risk.md) is related here.
+You could argue that the choice of _platform_ is also going to limit access: writing code for XBox-only leaves PlayStation owners out in the cold. This is _largely_ [Feature Access Risk](/tags/Feature-Access-Risk), though [Dependency Risk](/tags/Dependency-Risk) is related here.
In marketing terms, minimising [Feature Access Risk](#feature-access-risk) is all about [Segmentation](https://en.wikipedia.org/wiki/Market_segmentation): trying to work out _who_ your product is for and tailoring it to that particular market. As shown in the diagram above, mitigating [Feature Access Risk](#feature-access-risk) means increasing complexity: you have to deliver the software on more platforms, localised in more languages, with different configurations of features. It also means increased development effort.
\ No newline at end of file
diff --git a/docs/risks/Feature-Risks/Feature-Drift-Risk.md b/docs/risks/Feature-Risks/Feature-Drift-Risk.md
index d1c09411f..8757396e0 100644
--- a/docs/risks/Feature-Risks/Feature-Drift-Risk.md
+++ b/docs/risks/Feature-Risks/Feature-Drift-Risk.md
@@ -30,4 +30,4 @@ As shown in the diagram, saving your project from Feature Drift Risk means **fur
Sometimes, the only way to go is to start again with a clean sheet by some **disruptive innovation**.
-[Feature Drift Risk](Feature-Risk.md#feature-drift-risk) is _not the same thing_ as **Requirements Drift**, which is the tendency projects have to expand in scope as they go along. There are lots of reasons they do that, a key one being the [Hidden Risks](../thinking/Glossary.md#hidden-risk) uncovered on the project as it progresses.
+[Feature Drift Risk](/tags/Feature-Drift-Risk) is _not the same thing_ as **Requirements Drift**, which is the tendency projects have to expand in scope as they go along. There are lots of reasons they do that, a key one being the [Hidden Risks](/thinking/Glossary.md#hidden-risk) uncovered on the project as it progresses.
diff --git a/docs/risks/Feature-Risks/Feature-Risk.md b/docs/risks/Feature-Risks/Feature-Risk.md
index 3a10384ba..3846a2399 100644
--- a/docs/risks/Feature-Risks/Feature-Risk.md
+++ b/docs/risks/Feature-Risks/Feature-Risk.md
@@ -13,19 +13,19 @@ tags:
- Feature Risk
part_of: Operational Risk
---
-[Feature Risks](Feature-Risk.md) are types of risks to do with functionality that you need to have in the software you're building.
+[Feature Risks](/tags/Feature-Risk) are types of risks to do with functionality that you need to have in the software you're building.
-[Feature Risk](Feature-Risk.md) is very fundamental: if your project has _no_ [Feature Risk](Feature-Risk.md) it would be perfect! And we all know that _can't happen_.
+[Feature Risk](/tags/Feature-Risk) is very fundamental: if your project had _no_ [Feature Risk](/tags/Feature-Risk) it would be perfect! And we all know that _can't happen_.
![Feature Risk Family](/img/generated/risks/feature/feature-risks.png)
-As a rule of thumb, [Feature Risk](Feature-Risk.md) exists in the gaps between what users _want_, and what they _are given_.
+As a rule of thumb, [Feature Risk](/tags/Feature-Risk) exists in the gaps between what users _want_, and what they _are given_.
-Not considering [Feature Risk](Feature-Risk.md) means that you might be building the wrong functionality, for the wrong audience or at the wrong time. Eventually, this will come down to lost money, business, acclaim, or whatever you are doing your project for. So let's unpack this concept into some of its variations.
+Not considering [Feature Risk](/tags/Feature-Risk) means that you might be building the wrong functionality, for the wrong audience or at the wrong time. Eventually, this will come down to lost money, business, acclaim, or whatever you are doing your project for. So let's unpack this concept into some of its variations.
-As shown in the diagram above, [Feature Risks](Feature-Risk.md) are a family of risks you face any time you start trying to build functionality to serve a client. In this article, we will:
- - Break down and talk about the different [Feature Risks](Feature-Risk.md) shown in the diagram above.
+As shown in the diagram above, [Feature Risks](/tags/Feature-Risk) are a family of risks you face any time you start trying to build functionality to serve a client. In this article, we will:
+ - Break down and talk about the different [Feature Risks](/tags/Feature-Risk) shown in the diagram above.
- Discuss how they occur and what action you can take to address them.
- Analyse the family of feature risks along three axes of _fit_, _audience_ and _change_.
diff --git a/docs/risks/Feature-Risks/Implementation-Risk.md b/docs/risks/Feature-Risks/Implementation-Risk.md
index 88084741e..8b8410dc6 100644
--- a/docs/risks/Feature-Risks/Implementation-Risk.md
+++ b/docs/risks/Feature-Risks/Implementation-Risk.md
@@ -15,8 +15,8 @@ part_of: Feature Risk
![Implementation Risk](/img/generated/risks/feature/feature-implementation-risk.png)
-The [Feature Risk](Feature-Risk.md) family also includes things that don't work as expected, that is to say, [bugs](https://en.wikipedia.org/wiki/Software_bug). Although the distinction between "a missing feature" and "a broken feature" might be worth making in the development team, we can consider these both the same kind of risk: _the software doesn't do what the user expects_. We call these [Implementation Risks](Feature-Risk.md#implementation-risk).
+The [Feature Risk](/tags/Feature-Risk) family also includes things that don't work as expected, that is to say, [bugs](https://en.wikipedia.org/wiki/Software_bug). Although the distinction between "a missing feature" and "a broken feature" might be worth making in the development team, we can consider these both the same kind of risk: _the software doesn't do what the user expects_. We call these [Implementation Risks](/tags/Implementation-Risk).
As shown in the above diagram, we can mitigate this risk with _feedback_ from users, as well as further _development_ and _testing_.
-It's worth pointing out that sometimes, _the user expects the wrong thing_. This is a different but related risk, which could be down to [training](../practices/Training.md), [documentation](../practices/Documentation.md) or simply a [poor user interface](Communication-Risk.md) (and we'll look at that more in [Communication Risk](Communication-Risk.md).)
+It's worth pointing out that sometimes, _the user expects the wrong thing_. This is a different but related risk, which could be down to [training](/tags/Training), [documentation](/tags/Documentation) or simply a [poor user interface](/tags/Communication-Risk) (and we'll look at that more in [Communication Risk](/tags/Communication-Risk)).
diff --git a/docs/risks/Feature-Risks/Regression-Risk.md b/docs/risks/Feature-Risks/Regression-Risk.md
index 31f3287c0..d684eb2b1 100644
--- a/docs/risks/Feature-Risks/Regression-Risk.md
+++ b/docs/risks/Feature-Risks/Regression-Risk.md
@@ -19,8 +19,8 @@ Delivering new features can delight your customers, but breaking existing ones w
[Regression Risk](Feature-Risk.md#regression-risk) is the risk of breaking existing features in your software when you add new ones. As with other feature risks, the eventual result is the same: customers don't have the features they expect.
-Regression Risks increase as your code-base [gains Complexity](Complexity-Risk.md). That's because it becomes impossible to keep a complete [Internal Model](../thinking/Glossary.md#internal-model) of the whole thing in your head, and also your software gains "corner cases" or "edge conditions" which don't get tested very often.
+Regression Risks increase as your code-base [gains Complexity](/tags/Complexity-Risk). That's because it becomes impossible to keep a complete [Internal Model](/thinking/Glossary.md#internal-model) of the whole thing in your head, and also your software gains "corner cases" or "edge conditions" which don't get tested very often.
As shown in the above diagram, you can address Regression Risk with **specification** (defining clearly what the expected behaviour is) and **testing** (both manual and automated), but this takes time and will add extra complexity to your project (either in the form of code for automated tests, written specifications or a more elaborate process for releases).
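+
+As an illustrative sketch (JUnit 5, with an invented `Discount` class): an automated test pins the specified behaviour down, so a regression shows up as a failing build rather than a customer complaint.
+
+```java
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
+import org.junit.jupiter.api.Test;
+
+class DiscountRegressionTest {
+
+    // Pins down existing, specified behaviour. If a new feature
+    // accidentally changes it, this test fails before release.
+    @Test
+    void existingCustomersStillGetTenPercentOff() {
+        Discount discount = new Discount();
+        assertEquals(90.0, discount.apply(100.0, "EXISTING_CUSTOMER"), 0.001);
+    }
+}
+```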
-Regression Risk is something we'll come back to in [Operational Risk](Operational-Risk.md).
\ No newline at end of file
+Regression Risk is something we'll come back to in [Operational Risk](/tags/Operational-Risk).
\ No newline at end of file
diff --git a/docs/risks/Glossary-Of-Risk-Types.md b/docs/risks/Glossary-Of-Risk-Types.md
index 74ad09027..c54d36e72 100644
--- a/docs/risks/Glossary-Of-Risk-Types.md
+++ b/docs/risks/Glossary-Of-Risk-Types.md
@@ -13,41 +13,4 @@ tweet: yes
# Glossary of Risk Types
-| Risk | Definition |
-|------------------|--------------------------------------------------------------------------|
-|[Boundary](Boundary-Risk.md)|Risks due to the commitments we make around dependencies, and the limitations they place on our ability to change.|
-|[Agency](Agency-Risk.md)|Risks due to the fact that things you depend on have agency, and they have their own goals to pursue.|
-|[Channel](Communication-Risk.md#channel-risk)|Risks due to the inadequacy of the physical channel used to communicate our messages. e.g. noise, loss, interception, corruption.|
-|[Communication](Communication-Risk.md)|Risks due to the difficulty of communicating with other entities, be they people, software, processes etc.|
-|[Codebase](Complexity-Risk.md#codebase-risk)|The specific risks to a project of having a large, complex codebase to manage.|
-|[Complexity](Complexity-Risk.md)|Risks caused by the weight of complexity in the systems we create, and their resistance to change and comprehension.|
-|[Conceptual-integrity](Feature-Risk.md#conceptual-integrity-risk)|Risk that the software you provide is too complex, or doesn't match the expectations of your clients' internal models.|
-|[Coordination](Coordination-Risk.md)|Risks that a group of agents cannot work together in a mutually beneficial way, and their behaviour devolves into competition.|
-|[Dead-End](Complexity-Risk.md#dead-end-risk)|The risk that a particular approach to a change will fail. Caused by the fact that at some level, our internal models are not a complete reflection of reality.|
-|[Deadline](Deadline-Risk.md)|Where the use of a dependency has some kind of deadline, which can be missed.|
-|[Dependency](Dependency-Risk.md)|Risks faced by depending on something else. e.g. an event, process, person, piece of software or an organisation. |
-|[Feature-Access](Feature-Risk.md#feature-access-risk)|Risks due to some clients not having access to some or all of the features in your product.|
-|[Feature-Drift](Feature-Risk.md#feature-drift-risk)|Risk that the features required by clients will change and evolve over time. |
-|[Feature](Feature-Risk.md)|Risks you face when providing features for your clients.|
-|[Feature-Fit](Feature-Risk.md#feature-fit-risk)|Risk that the needs of the client don't coincide with services provided by the supplier.|
-|[Funding](Scarcity-Risk.md#funding-risk)|A particular scarcity risk, due to lack of funding.|
-|[Implementation](Feature-Risk.md#implementation-risk)|Risk that the functionality you are providing doesn't match the features the client is expecting, due to poor or partial implementation.|
-|[Internal-Model](Communication-Risk.md#internal-model-risk)|Risks arising from insufficient or erroneous internal models of reality.|
-|[Invisibility](Communication-Risk.md#invisibility-risk)|Risks caused by the choice of abstractions we use in communication.|
-|[Learning-Curve](Communication-Risk.md#learning-curve-risk)|Risks due to the difficulty faced in updating an internal model.|
-|[Map-And-Territory](Map-And-Territory-Risk.md)|Risks due to the differences between reality and the internal model of reality, and the assumption that they are equivalent. |
-|[Market](Feature-Risk.md#market-risk)|Risk that the value your clients place on the features you supply will change, over time.|
-|[Message](Communication-Risk.md#message-risk)|Risks caused by the difficulty of composing and interpreting messages in the communication process.|
-|[Operational](Operational-Risk.md)|Risks of losses or reputational damage caused by failing processes or real-world events.|
-|[Opportunity](Scarcity-Risk.md#opportunity-risk)|Risk that a particular set of market conditions.|
-|[Process](Process-Risk.md)|Risks due to the fact that when dealing with a dependency, we have to follow a particular protocol of communication, which may not work out the way we want.|
-|[Protocol](Communication-Risk.md#protocol-risk)|Risks due to the failure of encoding or decoding messages between two parties in communication. |
-|[Red-Queen](Scarcity-Risk.md#red-queen-risk)|The general risk that the competitive environment we operate within changes over time.|
-|[Regression](Feature-Risk.md#regression-risk)|Risk that the functionality you provide changes for the worse, over time.|
-|[Reliability](Dependency-Risk.md#reliability-risk)|Risks of not getting benefit from a dependency due to it's reliability.|
-|[Scarcity](Scarcity-Risk.md)|Risk of not being able to access a dependency in a timely fashion due to it's scarcity.|
-|[Schedule](Scarcity-Risk.md#schedule-risk)|The aspect of dependency risk related to time.|
-|[Security](Agency-Risk.md#security)|Agency Risks due to actors from outside the system.|
-|[Software Dependency](Software-Dependency-Risk.md)|Dependency Risk due to software dependencies.|
-|[Staff](Scarcity-Risk.md#staff-risk)|The aspect of dependency risks related to employing people.|
-|[Trust-And-Belief](Communication-Risk.md#trust--belief-risk)|Risk that a party we are communicating with can't be trusted, as it has agency or is unreliable in some other way. |
+
diff --git a/docs/risks/Map-And-Territory-Risk.md b/docs/risks/Map-And-Territory-Risk.md
index 1248103e6..46daeb2a6 100644
--- a/docs/risks/Map-And-Territory-Risk.md
+++ b/docs/risks/Map-And-Territory-Risk.md
@@ -20,13 +20,13 @@ As we discussed in the [Communication Risk](Communication-Risk.md#misinterpretat
For example, Risk-First is about naming _risks_ within software development, so we can discuss and understand them better.
-Our [Internal Models](../thinking/Glossary.md#internal-model) of the world are constructed from these abstractions and their relationships.
+Our [Internal Models](/thinking/Glossary.md#internal-model) of the world are constructed from these abstractions and their relationships.
![Maps and Territories, and Communication happening between them](/img/generated/risks/map-and-territory/communication.png)
-As the diagram above shows, there is a translation going on here: observations about the arrangement of _atoms_ in the world are _communicated_ to our [Internal Models](../thinking/Glossary.md#internal-model) and stored as patterns of _information_ (measured in bits and bytes).
+As the diagram above shows, there is a translation going on here: observations about the arrangement of _atoms_ in the world are _communicated_ to our [Internal Models](/thinking/Glossary.md#internal-model) and stored as patterns of _information_ (measured in bits and bytes).
-We face [Map And Territory Risk](Map-And-Territory-Risk.md) because we base our behaviour on our [Internal Models](../thinking/Glossary.md#internal-model) rather than reality itself. It comes from the expression "Confusing the Map for the Territory", attributed to Alfred Korzybski:
+We face [Map And Territory Risk](/tags/Map-And-Territory-Risk) because we base our behaviour on our [Internal Models](/thinking/Glossary.md#internal-model) rather than reality itself. It comes from the expression "Confusing the Map for the Territory", attributed to Alfred Korzybski:
> "Polish-American scientist and philosopher Alfred Korzybski remarked that "the map is not the territory" and that "the word is not the thing", encapsulating his view that an abstraction derived from something, or a reaction to it, is not the thing itself. Korzybski held that many people _do_ confuse maps with territories, that is, confuse models of reality with reality itself." - [Map-Territory Relation, _Wikipedia_](https://en.wikipedia.org/wiki/Map–territory_relation)
@@ -34,12 +34,12 @@ We face [Map And Territory Risk](Map-And-Territory-Risk.md) because we base our
As the above diagram shows, there are two parts to this risk, which we are going to examine in this section:
- - **The internal model may be insufficient.** This leads to issues along the same axes we introduced in [Feature Risk](Feature-Risk.md) (that is Fitness, Audience and Evolution). We'll look at the examples of SatNavs, Software Metrics and Hype-Cycles along the way to illustrate this.
- - **The assumption that the model is right.** We're going to look at [Map and Territory Risk](Map-And-Territory-Risk.md) within the contexts of **machines**, **people**, **hierarchies** and **markets**.
+ - **The internal model may be insufficient.** This leads to issues along the same axes we introduced in [Feature Risk](/tags/Feature-Risk) (that is Fitness, Audience and Evolution). We'll look at the examples of SatNavs, Software Metrics and Hype-Cycles along the way to illustrate this.
+ - **The assumption that the model is right.** We're going to look at [Map and Territory Risk](/tags/Map-And-Territory-Risk) within the contexts of **machines**, **people**, **hierarchies** and **markets**.
## Fitness
-In the [Feature Risk](Feature-Risk.md) section we looked at ways in which our software project might have risks due to having _inappropriate_ features ([Feature Fit Risk](Feature-Risk.md#feature-fit-risk)), _broken_ features ([Feature Implementation Risk](Feature-Risk.md#implementation-risk)) or _too many of the wrong_ features ([Conceptual Integrity Risk](Feature-Risk.md#conceptual-integrity-risk)). Let's see how these same categories also apply to [Internal Models](../thinking/Glossary.md#internal-model).
+In the [Feature Risk](/tags/Feature-Risk) section we looked at ways in which our software project might have risks due to having _inappropriate_ features ([Feature Fit Risk](/tags/Feature-Fit-Risk)), _broken_ features ([Feature Implementation Risk](/tags/Implementation-Risk)) or _too many of the wrong_ features ([Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk)). Let's see how these same categories also apply to [Internal Models](/thinking/Glossary.md#internal-model).
### Example: The SatNav
@@ -49,24 +49,24 @@ In the headline above, taken from [the Telegraph newspaper](https://www.telegrap
This wasn't born of stupidity, but of experience: SatNavs are pretty reliable. _So many times_ the SatNav had been right, that the driver stopped _questioning its fallibility_.
-There are two [Map and Territory Risks](Map-And-Territory-Risk.md) here:
+There are two [Map and Territory Risks](/tags/Map-And-Territory-Risk) here:
-- The [Internal Model](../thinking/Glossary.md#internal-model) of the _SatNav_ contained information that was wrong: the track had been marked up as a road, rather than a path.
-- The [Internal Model](../thinking/Glossary.md#internal-model) of the _driver_ was wrong: his abstraction of "the SatNav is always right" turned out to be only _mostly_ accurate.
+- The [Internal Model](/thinking/Glossary.md#internal-model) of the _SatNav_ contained information that was wrong: the track had been marked up as a road, rather than a path.
+- The [Internal Model](/thinking/Glossary.md#internal-model) of the _driver_ was wrong: his abstraction of "the SatNav is always right" turned out to be only _mostly_ accurate.
-You could argue that both the SatNav and the Driver's _[Internal Model](../thinking/Glossary.md#internal-model)_ had bugs in them. That is, they both suffer the [Feature Implementation Risk](Feature-Risk.md#implementation-risk) we saw in the [Feature Risk](Feature-Risk.md) section. If a SatNav has too much of this, you'd end up not trusting it, and getting a new one. With your _personal_ [Internal Model](../thinking/Glossary.md#internal-model), you can't buy a new one, but you may learn to _trust your assumptions less_.
+You could argue that both the SatNav and the Driver's _[Internal Model](/thinking/Glossary.md#internal-model)_ had bugs in them. That is, they both suffer the [Feature Implementation Risk](/tags/Implementation-Risk) we saw in the [Feature Risk](/tags/Feature-Risk) section. If a SatNav has too much of this, you'd end up not trusting it, and getting a new one. With your _personal_ [Internal Model](/thinking/Glossary.md#internal-model), you can't buy a new one, but you may learn to _trust your assumptions less_.
![Some examples of Feature Fit Risks, as manifested in the Internal Model](/img/generated/risks/map-and-territory/map_and_territory_table_1.png)
-The diagram above shows how types of [Feature Fit Risk](Feature-Risk.md) can manifest in the [Internal Model](../thinking/Glossary.md#internal-model).
+The diagram above shows how types of [Feature Fit Risk](/tags/Feature-Risk) can manifest in the [Internal Model](/thinking/Glossary.md#internal-model).
## Audience
-Communication allows us to _share_ information between [Internal Models](../thinking/Glossary.md#internal-model) of a whole audience of people. The [Communication Risk](Communication-Risk.md) and [Coordination Risk](Coordination-Risk.md) sections covered the difficulties inherent in aligning [Internal Models](../thinking/Glossary.md#internal-model) so that they cooperate.
+Communication allows us to _share_ information between [Internal Models](/thinking/Glossary.md#internal-model) of a whole audience of people. The [Communication Risk](/tags/Communication-Risk) and [Coordination Risk](/tags/Coordination-Risk) sections covered the difficulties inherent in aligning [Internal Models](/thinking/Glossary.md#internal-model) so that they cooperate.
![Relative popularity of "Machine Learning" and "Big Data" as search terms on [Google Trends](https://trends.google.com), 2011-2018 ](/img/google-trends.png)
-But how does [Map and Territory Risk](Map-And-Territory-Risk.md) apply across a population of [Internal Models](../thinking/Glossary.md#internal-model)? Can we track the rise-and-fall of _ideas_ like we track stock prices? In effect, this is what [Google Trends](https://trends.google.com) does. In the chart above, we can see the relative popularity of two search terms over time. This is probably as good an indicator as any of the changing popularity of an abstraction within an audience.
+But how does [Map and Territory Risk](/tags/Map-And-Territory-Risk) apply across a population of [Internal Models](/thinking/Glossary.md#internal-model)? Can we track the rise-and-fall of _ideas_ like we track stock prices? In effect, this is what [Google Trends](https://trends.google.com) does. In the chart above, we can see the relative popularity of two search terms over time. This is probably as good an indicator as any of the changing popularity of an abstraction within an audience.
### Example: Map And Territory Risk Drives The Hype Cycle
@@ -78,33 +78,33 @@ Most ideas (and most products) have a slow, hard climb to wide-scale adoption.
The five phases (and the "Hype" itself as the thick black line) are shown in the chart above. We start off at the "Technology Trigger", moving to the "Peak of Inflated Expectations", then to the "Trough of Disillusionment" and finally up the "Slope of Enlightenment" to the "Plateau of Productivity".
-The concept of [Map and Territory Risk](Map-And-Territory-Risk.md) actually helps explain why this curve has the shape it does. To see why, let's consider each line in turn:
+The concept of [Map and Territory Risk](/tags/Map-And-Territory-Risk) actually helps explain why this curve has the shape it does. To see why, let's consider each line in turn:
- The **Awareness** of (or enthusiasm for) the idea within the population is the dotted line.
- - The **Knowledge** (or _understanding_) of the idea within the audience (a [Learning Curve](Communication-Risk.md#learning-curve-risk), if you will) is the dashed line. Both of these are modelled with [Cumulative Distribution](https://en.wikipedia.org/wiki/Cumulative_distribution_function#Use_in_statistical_analysis) functions which are often used for modelling the spread of a phenomena (disease, product uptake, idea) within a population. As you would expect, **Knowledge** increases less rapidly than **Awareness**.
+ - The **Knowledge** (or _understanding_) of the idea within the audience (a [Learning Curve](/tags/Learning-Curve-Risk), if you will) is the dashed line. Both of these are modelled with [Cumulative Distribution](https://en.wikipedia.org/wiki/Cumulative_distribution_function#Use_in_statistical_analysis) functions, which are often used for modelling the spread of a phenomenon (disease, product uptake, idea) within a population. As you would expect, **Knowledge** increases less rapidly than **Awareness**.
 - **Map And Territory Risk** is the difference between **Awareness** and **Knowledge**. Its highest point is where the **Awareness** of the idea is farthest from the **Knowledge** of it.
- - **Hype** is calculated here as being the **Awareness** line, subtracting **Map and Territory Risk** from a point lagging behind the current time (since it takes time to appreciate this risk). As the population appreciates more [Map and Territory Risk](Map-And-Territory-Risk.md), **Hype** decreases.
+ - **Hype** is calculated here as the **Awareness** line minus the **Map and Territory Risk** from a point lagging behind the current time (since it takes time to appreciate this risk). As the population appreciates more [Map and Territory Risk](/tags/Map-And-Territory-Risk), **Hype** decreases.
-At the point where the effect of [Map and Territory Risk](Map-And-Territory-Risk.md) is at its greatest we end up in the "Trough of Disillusionment". Eventually, we escape the trough as **Knowledge** and understanding of the idea increases, reducing [Map and Territory Risk](Map-And-Territory-Risk.md).
+At the point where the effect of [Map and Territory Risk](/tags/Map-And-Territory-Risk) is at its greatest we end up in the "Trough of Disillusionment". Eventually, we escape the trough as **Knowledge** and understanding of the idea increases, reducing [Map and Territory Risk](/tags/Map-And-Territory-Risk).
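+
+The model just described is simple enough to sketch in code. Below is a minimal version in Python, assuming logistic curves for **Awareness** and **Knowledge** and an arbitrary lag (every number here is illustrative; the real parameters live in the Numbers model linked later in this section):
+
+```python
+import math
+
+def logistic(t, midpoint, steepness):
+    """Cumulative adoption: fraction of the population reached at time t."""
+    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))
+
+def awareness(t):               # rises early and fast
+    return logistic(t, midpoint=20, steepness=0.30)
+
+def knowledge(t):               # rises later and more slowly
+    return logistic(t, midpoint=45, steepness=0.15)
+
+def map_territory_risk(t):      # the gap between the two curves
+    return awareness(t) - knowledge(t)
+
+LAG = 8  # how long the population takes to appreciate the risk (assumed)
+
+def hype(t):
+    # Awareness, minus the Map and Territory Risk from LAG periods ago.
+    return awareness(t) - map_territory_risk(t - LAG)
+
+for t in range(0, 101, 10):
+    print(f"t={t:3d}  hype={hype(t):+.2f}  risk={map_territory_risk(t):+.2f}")
+```
+
+Printed over time, `hype` climbs, dips while the lagged risk term is large (the "Trough of Disillusionment") and then recovers as **Knowledge** closes the gap.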
![Hype Cycle 2: more even growth of Awareness and Knowledge means no "Trough of Disillusionment"](/img/numbers/hype2.png)
As you might expect, the "Trough of Disillusionment" exists because the **Awareness** of the idea and the **Knowledge** about it increase at different rates.
-Where the **Awareness** and **Knowledge** grow more evenly together, there is no spike in [Map and Territory Risk](Map-And-Territory-Risk.md) and we don't see the corresponding "Trough of Disillusionment" at all, as shown in the above chart.
+Where the **Awareness** and **Knowledge** grow more evenly together, there is no spike in [Map and Territory Risk](/tags/Map-And-Territory-Risk) and we don't see the corresponding "Trough of Disillusionment" at all, as shown in the above chart.
![Audience Feature Risks, as manifested in the Internal Model](/img/generated/risks/map-and-territory/map_and_territory_table_2.png)
-The diagram above shows how Audience-type [Feature Risks](Feature-Risk.md) can manifest in the Internal Model. (The Hype Cycle model is available in **Numbers** format [here](https://github.com/risk-first/website/blob/master/numbers/RiskMatrix.numbers).)
+The diagram above shows how Audience-type [Feature Risks](/tags/Feature-Risk) can manifest in the Internal Model. (The Hype Cycle model is available in **Numbers** format [here](https://github.com/risk-first/website/blob/master/numbers/RiskMatrix.numbers).)
## Evolution
So concepts and abstractions spread through an audience. But what happens next?
- - People will use and abuse new ideas up to the point when they start breaking down. (We also discussed this as the **Peter Principle** in [Boundary Risk](Boundary-Risk.md).)
+ - People will use and abuse new ideas up to the point when they start breaking down. (We also discussed this as the **Peter Principle** in [Boundary Risk](/tags/Boundary-Risk).)
- At the same time, reality itself _evolves_ in response to the idea: the new idea displaces old ones, behaviour changes, and the idea itself can change.
### Example: Metrics
@@ -133,13 +133,13 @@ But _correlation_ doesn't imply _causation_. The _cause_ might be different:
- User satisfaction and SLOC might both be down to the calibre of the developers.
- Response time and revenue might both be down to clever team planning.
-Metrics are seductive because they simplify reality and are easily communicated. But they _inherently_ contain [Map and Territory Risk](Map-And-Territory-Risk.md): by relying _only_ on the metrics, you're not really _seeing_ the reality.
+Metrics are seductive because they simplify reality and are easily communicated. But they _inherently_ contain [Map and Territory Risk](/tags/Map-And-Territory-Risk): by relying _only_ on the metrics, you're not really _seeing_ the reality.
The devil is in the detail.
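+
+To make the "common cause" point concrete, here is a small, entirely invented simulation: a latent "developer calibre" score drives _both_ user satisfaction and SLOC, so the two metrics correlate strongly even though neither causes the other:
+
+```python
+import random
+
+random.seed(1)
+
+# Hypothetical data: one hidden factor drives both observed metrics.
+calibre = [random.gauss(0, 1) for _ in range(500)]
+satisfaction = [c + random.gauss(0, 0.5) for c in calibre]
+sloc = [c + random.gauss(0, 0.5) for c in calibre]
+
+def correlation(xs, ys):
+    n = len(xs)
+    mx, my = sum(xs) / n, sum(ys) / n
+    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
+    vx = sum((x - mx) ** 2 for x in xs)
+    vy = sum((y - my) ** 2 for y in ys)
+    return cov / (vx * vy) ** 0.5
+
+print(correlation(satisfaction, sloc))  # ~0.8, with no causal link at all
+```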
### Reality Evolves
-In the same way that [markets evolve to demand more features](Scarcity-Risk.md#red-queen-risk), our behaviour evolves to incorporate new ideas. The more popular an idea is, the more people will modify their behaviour as a result of it, and the more the world will change.
+In the same way that [markets evolve to demand more features](/tags/Red-Queen-Risk), our behaviour evolves to incorporate new ideas. The more popular an idea is, the more people will modify their behaviour as a result of it, and the more the world will change.
In the case of metrics, this is where they start being used not just as indicators but as measures of performance or targets:
@@ -167,17 +167,17 @@ SLOC is not on its own a _bad idea_, but using it as a metric for developer prod
![Evolution Feature Risks, as manifested in the Internal Model](/img/generated/risks/map-and-territory/map_and_territory_table_3.png)
-The diagram above shows how Evolution-type [Feature Risks](Feature-Risk.md) can manifest in the Internal Model.
+The diagram above shows how Evolution-type [Feature Risks](/tags/Feature-Risk) can manifest in the Internal Model.
## Humans and Machines
-In the example of the SatNav, we saw how the _quality_ of [Map and Territory Risk](Map-And-Territory-Risk.md) is different for _people_ and _machines_. Whereas people _should_ be expected show skepticism for new (unlikely) information our databases accept it unquestioningly. _Forgetting_ is an everyday, usually benign part of our human [Internal Model](../thinking/Glossary.md#internal-model), but for software systems it is a production crisis involving 3am calls and backups.
+In the example of the SatNav, we saw how the _quality_ of [Map and Territory Risk](/tags/Map-And-Territory-Risk) is different for _people_ and _machines_. Whereas people _should_ be expected to show skepticism for new (unlikely) information, our databases accept it unquestioningly. _Forgetting_ is an everyday, usually benign part of our human [Internal Model](/thinking/Glossary.md#internal-model), but for software systems it is a production crisis involving 3am calls and backups.
-For Humans, [Map and Territory Risk](Map-And-Territory-Risk.md) is exacerbated by [cognitive biases](https://en.wikipedia.org/wiki/List_of_cognitive_biases):
+For Humans, [Map and Territory Risk](/tags/Map-And-Territory-Risk) is exacerbated by [cognitive biases](https://en.wikipedia.org/wiki/List_of_cognitive_biases):
> "Cognitive biases are systematic patterns of deviation from norm or rationality in judgement, and are often studied in psychology and behavioural economics." - [Cognitive Bias, _Wikipedia_](https://en.wikipedia.org/wiki/List_of_cognitive_biases)
-There are _lots_ of cognitive biases. But let's just mention some that are relevant to [Map and Territory Risk](Map-And-Territory-Risk.md):
+There are _lots_ of cognitive biases. But let's just mention some that are relevant to [Map and Territory Risk](/tags/Map-And-Territory-Risk):
- **Availability Heuristic**: people overestimate the importance of knowledge they have been exposed to.
 - **The Ostrich Effect**: dangerous information is ignored or avoided because of the emotions it will evoke.
@@ -185,7 +185,7 @@ There are _lots_ of cognitive biases. But let's just mention some that are rele
## Hierarchical Organisations
-[Map And Territory Risk](Map-And-Territory-Risk.md) "trickles down" through an organisation. The higher levels have an out-sized ability to pervert the incentives at lower levels because once an organisation begins to pursue a "bullshit objective" the whole company can align to this.
+[Map And Territory Risk](/tags/Map-And-Territory-Risk) "trickles down" through an organisation. The higher levels have an out-sized ability to pervert the incentives at lower levels, because once an organisation begins to pursue a "bullshit objective" the whole company can align to it.
[The Huffington Post](https://www.huffingtonpost.com/otto-scharmer/the-fish-rots-from-the-he_b_8208652.html) paints a brilliant picture of how Volkswagen managed to get caught faking their emissions tests. As they point out:
@@ -200,7 +200,7 @@ This article identifies the following process:
## Markets
-We've considered [Map and Territory Risk](Map-And-Territory-Risk.md) for individuals, teams and organisations. [Inadequate Equilibria](https://equilibriabook.com) by Eleizer Yudkovsky, looks at how perverse incentive mechanisms break not just departments, but entire societal systems. He highlights one example involving _academics_ and _grantmakers_ in academia:
+We've considered [Map and Territory Risk](/tags/Map-And-Territory-Risk) for individuals, teams and organisations. [Inadequate Equilibria](https://equilibriabook.com) by Eliezer Yudkowsky looks at how perverse incentive mechanisms break not just departments, but entire societal systems. He highlights one example involving _academics_ and _grantmakers_ in academia:
- It's not very apparent which academics are more worthy of funding.
- One proxy is what they've published (scientific papers) and where they've published (journals).
@@ -211,8 +211,8 @@ We've considered [Map and Territory Risk](Map-And-Territory-Risk.md) for individ
> "Now consider the system of scientific journals... Some journals are prestigious. So university hiring committees pay the most attention to publications in that journal. So people with the best, most interesting-looking publications try to send them to that journal. So if a university hiring committee paid an equal amount of attention to publications in lower-prestige journals, they’d end up granting tenure to less prestigious people. Thus, the whole system is a stable equilibrium that nobody can unilaterally defy except at cost to themselves." - [Inadequate Equilibria, _Eleizer Yudkovsky_](https://equilibriabook.com/molochs-toolbox/)
-As the book points out, while everyone _persists_ in using an inadequate abstraction, the system is broken. However, [Coordination](Coordination-Risk.md) would be required for everyone to _stop_ doing it this way, which is hard work. (Maps are easier to fix in a top-down hierarchy.)
+As the book points out, while everyone _persists_ in using an inadequate abstraction, the system is broken. However, [Coordination](/tags/Coordination-Risk) would be required for everyone to _stop_ doing it this way, which is hard work. (Maps are easier to fix in a top-down hierarchy.)
Scientific journals are a single example taken from a closely argued book investigating lots of cases of this kind. It's worth taking the time to read a couple of its chapters on this interesting topic. (Like Risk-First, it is available to read online.)
-As usual, this section forms a grab-bag of examples in a complex topic. But it's time to move on as there is one last stop we have to make on the [Risk Landscape](../thinking/Glossary.md#risk-landscape), and that is to look at [Operational Risk](Operational-Risk.md).
\ No newline at end of file
+As usual, this section forms a grab-bag of examples in a complex topic. But it's time to move on as there is one last stop we have to make on the [Risk Landscape](/thinking/Glossary.md#risk-landscape), and that is to look at [Operational Risk](/tags/Operational-Risk).
\ No newline at end of file
diff --git a/docs/risks/Operational-Risk.md b/docs/risks/Operational-Risk.md
index 5ce747140..d19c392e8 100644
--- a/docs/risks/Operational-Risk.md
+++ b/docs/risks/Operational-Risk.md
@@ -20,35 +20,35 @@ tags:
In this section we're going to start considering the realities of running software systems in the real world.
-There is a lot to this subject, so this section is just a taster: we're going to set the scene by looking at what constitutes an [Operational Risk](Operational-Risk.md), and then look at the related discipline of [Operations Management](#operations-management). Following this background, we'll apply the Risk-First model and have a high-level look at the various mitigations for [Operational Risk](Operational-Risk.md).
+There is a lot to this subject, so this section is just a taster: we're going to set the scene by looking at what constitutes an [Operational Risk](/tags/Operational-Risk), and then look at the related discipline of [Operations Management](#operations-management). Following this background, we'll apply the Risk-First model and have a high-level look at the various mitigations for [Operational Risk](/tags/Operational-Risk).
## Operational Risks
-When building software, it's tempting to take a very narrow view of the dependencies of a system, but [Operational Risks](Operational-Risk.md) are often caused by dependencies we _don't_ consider - i.e. the **Operational Context** within which the system is operating. Here are some examples:
+When building software, it's tempting to take a very narrow view of the dependencies of a system, but [Operational Risks](/tags/Operational-Risk) are often caused by dependencies we _don't_ consider - i.e. the **Operational Context** within which the system is operating. Here are some examples:
- **[Staff Risks](Scarcity-Risk.md#staff-risk)**:
 - Freak weather conditions affecting the ability of staff to get to work, interrupting the development and support teams.
- Reputational damage caused when staff are rude to the customers.
- - **[Reliability Risks](Dependency-Risk.md#reliability-risk)**:
+ - **[Reliability Risks](/tags/Reliability-Risk)**:
- A data-centre going off-line, causing your customers to lose access.
- A power cut causing backups to fail.
- Not having enough desks for everyone to sit at.
- - **[Process Risks](Process-Risk.md)**:
+ - **[Process Risks](/tags/Process-Risk)**:
- Regulatory change, which means you have to adapt your business model.
 - Insufficient controls, meaning you don't notice when some transactions are failing, leaving you out-of-pocket.
- Data loss because of bugs introduced during an untested release.
- - **[Software Dependency Risk](Software-Dependency-Risk.md)**:
+ - **[Software Dependency Risk](/tags/Software-Dependency-Risk)**:
- Hackers exploit weaknesses in a piece of 3rd party software, bringing your service down.
- - **[Agency Risk](Agency-Risk.md)**:
+ - **[Agency Risk](/tags/Agency-Risk)**:
- Workers going on strike.
- Employees trying to steal from the company (bad actors).
- Other crime, such as hackers stealing data.
-This is a long laundry-list of everything that can go wrong due to operating in "The Real World". Although we've spent a lot of time looking at the varieties of [Dependency Risk](Dependency-Risk.md) on a software project, with [Operational Risk](Operational-Risk.md) we have to consider that these dependencies will fail in any number of unusual ways, and we can't be ready for all of them. Preparing for this comes under the umbrella of [Operations Management](#operations-management).
+This is a long laundry-list of everything that can go wrong due to operating in "The Real World". Although we've spent a lot of time looking at the varieties of [Dependency Risk](/tags/Dependency-Risk) on a software project, with [Operational Risk](/tags/Operational-Risk) we have to consider that these dependencies will fail in any number of unusual ways, and we can't be ready for all of them. Preparing for this comes under the umbrella of [Operations Management](#operations-management).
## Operations Management
@@ -82,7 +82,7 @@ Let's look at each of these actions in turn.
![Control, Monitoring And Detection](/img/generated/risks/operational/monitoring-detection.png)
-Since humans and machines have different areas of expertise, and because [Operational Risks](Operational-Risk.md) are often novel, it's often not optimal to try and automate everything. A good operation will consist of a mix of human and machine actors, each playing to their strengths (see the table below).
+Since humans and machines have different areas of expertise, and because [Operational Risks](/tags/Operational-Risk) are often novel, it's often not optimal to try and automate everything. A good operation will consist of a mix of human and machine actors, each playing to their strengths (see the table below).
The aim is to build a human-machine operational system that is [_Homeostatic_](https://en.wikipedia.org/wiki/Homeostasis). This is the property of living things to try and maintain an equilibrium (for example, body temperature or blood glucose levels), but it also applies to systems at any scale. The key to homeostasis is to build systems with feedback loops, even though this leads to more complex systems overall. The diagram above shows some of the actions involved in these kinds of feedback loops within IT operations.
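+
+As a small illustration, the smallest useful homeostatic loop in an IT operation looks something like the sketch below. Everything named here is hypothetical: `read_queue_depth` and `set_workers` stand in for whatever monitoring probe and control action your operation really has.
+
+```python
+import random
+import time
+
+TARGET_DEPTH = 100   # the equilibrium we want to maintain
+workers = 4
+
+def read_queue_depth():
+    """Hypothetical monitoring probe; imagine a real metrics query here."""
+    return random.randint(0, 300)
+
+def set_workers(n):
+    """Hypothetical control action; imagine a real scaling call here."""
+    global workers
+    workers = max(1, n)
+    print(f"scaled to {workers} workers")
+
+def control_loop(iterations=5):
+    """Detect drift from the equilibrium, then act to correct it."""
+    for _ in range(iterations):
+        depth = read_queue_depth()
+        if depth > TARGET_DEPTH * 1.5:
+            set_workers(workers + 1)   # falling behind: add capacity
+        elif depth < TARGET_DEPTH * 0.5:
+            set_workers(workers - 1)   # over-provisioned: shed capacity
+        time.sleep(0.1)                # in reality, minutes rather than milliseconds
+
+control_loop()
+```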
@@ -93,15 +93,15 @@ The aim is to build a human-machine operational system that is [_Homeostatic_](h
|Expensive at scale |Cheap at scale |
|Reacting and Anticipating |Recording |
-As we saw in [Map and Territory Risk](Map-And-Territory-Risk.md), it's very easy to fool yourself, especially around [Key Performance Indicators (KPIs)](https://en.wikipedia.org/wiki/Performance_indicator) and metrics. Large organisations have [Audit](https://en.wikipedia.org/wiki/Audit) functions precisely to guard against their own internal failing [processes](Process-Risk.md) and [Agency Risk](Agency-Risk.md). Audits could be around software tools, processes, practices, quality and so on. Practices such as [Continuous Improvement](https://en.wikipedia.org/wiki/Continual_improvement_process) and [Total Quality Management](https://en.wikipedia.org/wiki/Total_quality_management) also figure here.
+As we saw in [Map and Territory Risk](/tags/Map-And-Territory-Risk), it's very easy to fool yourself, especially around [Key Performance Indicators (KPIs)](https://en.wikipedia.org/wiki/Performance_indicator) and metrics. Large organisations have [Audit](https://en.wikipedia.org/wiki/Audit) functions precisely to guard against their own failing internal [processes](/tags/Process-Risk) and [Agency Risk](/tags/Agency-Risk). Audits could cover software tools, processes, practices, quality and so on. Practices such as [Continuous Improvement](https://en.wikipedia.org/wiki/Continual_improvement_process) and [Total Quality Management](https://en.wikipedia.org/wiki/Total_quality_management) also figure here.
### Scanning The Operational Context
-There are plenty of [Hidden Risks](../thinking/Glossary.md#hidden-risk) within the operation's environment. These change all the time in response to economic, legal or political change. In order to manage a risk, you have to uncover it, so part of [Operations Management](#operations-management) is to look for trouble.
+There are plenty of [Hidden Risks](/thinking/Glossary.md#hidden-risk) within the operation's environment. These change all the time in response to economic, legal or political change. In order to manage a risk, you have to uncover it, so part of [Operations Management](#operations-management) is to look for trouble.
-- **Environmental Scanning** is all about trying to determine which changes in the environment are going to impact your operation. Here we are trying to determine the level of [Dependency Risk](Dependency-Risk.md) we face for external dependencies, such as suppliers, customers, markets and regulation. Tools like [PEST](https://en.wikipedia.org/wiki/PEST_analysis) are relevant, as is
+- **Environmental Scanning** is all about trying to determine which changes in the environment are going to impact your operation. Here we are trying to determine the level of [Dependency Risk](/tags/Dependency-Risk) we face for external dependencies, such as suppliers, customers, markets and regulation. Tools like [PEST](https://en.wikipedia.org/wiki/PEST_analysis) are relevant, as is
- **[Penetration Testing](https://en.wikipedia.org/wiki/Penetration_test)**: looking for security weaknesses within the operation. See [OWASP](https://en.wikipedia.org/wiki/OWASP) for examples.
-- **[Vulnerability Management](https://en.wikipedia.org/wiki/Vulnerability_management)** is about keeping up-to-date with vulnerabilities in [Software Dependencies](Software-Dependency-Risk.md).
+- **[Vulnerability Management](https://en.wikipedia.org/wiki/Vulnerability_management)** is about keeping up-to-date with vulnerabilities in [Software Dependencies](/tags/Software-Dependency-Risk) (see the sketch below).
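+
+The essence of that last practice can be sketched in a few lines: diff what you run against what is known to be bad. Everything below is hypothetical data for illustration; a real process would pull package versions from your build tooling and advisories from a feed such as the NVD.
+
+```python
+# Hypothetical installed versions and advisory data, for illustration only.
+installed = {"libfoo": "1.2.0", "libbar": "2.0.1", "libbaz": "0.9.4"}
+advisories = {"libfoo": {"1.2.0", "1.2.1"}, "libbaz": {"0.8.0"}}
+
+def vulnerable(installed, advisories):
+    """Return (package, version) pairs with a known advisory."""
+    return [(pkg, ver) for pkg, ver in installed.items()
+            if ver in advisories.get(pkg, set())]
+
+for pkg, ver in vulnerable(installed, advisories):
+    print(f"ACTION NEEDED: {pkg} {ver} has a known vulnerability")
+```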
## Planning
@@ -115,20 +115,20 @@ As the diagram above shows, we can bring [Planning](#planning) to bear on depend
![Design and Change Activities](/img/generated/risks/operational/design-change.png)
-Since our operation exists in a world of risks like [Red Queen Risk](Scarcity-Risk.md#red-queen-risk) and [Feature Drift Risk](Feature-Risk.md#feature-drift-risk), we would expect that the output of our [Planning](#planning) actions would result in changes to our operation.
+Since our operation exists in a world of risks like [Red Queen Risk](/tags/Red-Queen-Risk) and [Feature Drift Risk](/tags/Feature-Drift-Risk), we would expect the output of our [Planning](#planning) actions to result in changes to our operation.
While _planning_ is a day-to-day operational feedback loop, _design_ is a longer feedback loop changing not just the parameters of the operation, but the operation itself.
-You might think that for an IT operation, tasks like [Design](#design) belong within a separate "Development" function within an organisation. Traditionally, this might have been the case. However separating Development from Operations implies [Boundary Risk](Boundary-Risk.md) between these two functions. For example, the developers might employ different tools, equipment and processes to the Operations team resulting in a mismatch when software is delivered.
+You might think that for an IT operation, tasks like [Design](#design) belong within a separate "Development" function within an organisation. Traditionally, this might have been the case. However, separating Development from Operations implies [Boundary Risk](/tags/Boundary-Risk) between these two functions. For example, the developers might employ different tools, equipment and processes to the Operations team, resulting in a mismatch when software is delivered.
-In recent years the [DevOps](https://en.wikipedia.org/wiki/DevOps) movement has brought this [Boundary Risk](Boundary-Risk.md) into sharper focus. This specifically means:
+In recent years the [DevOps](https://en.wikipedia.org/wiki/DevOps) movement has brought this [Boundary Risk](/tags/Boundary-Risk) into sharper focus. This specifically means:
- Using code to automate previously manual Operations functions, like monitoring and releasing.
- Involving Operations in the planning and design, so that the delivered software is optimised for the environment it runs in.
## Improvement
-No system can be perfect, and after it meets the real world, we will want to improve it over time. But [Operational Risk](Operational-Risk.md) includes an element of [Trust & Belief Risk](Communication-Risk.md#trust--belief-risk): we have a _reputation_ and the good will of our customers to consider when we make improvements. Because this is very hard to rebuild, we should consider this before releasing software that might not live up to expectations.
+No system can be perfect, and after it meets the real world, we will want to improve it over time. But [Operational Risk](/tags/Operational-Risk) includes an element of [Trust & Belief Risk](/tags/Trust-And-Belief-Risk): we have a _reputation_ and the goodwill of our customers to consider when we make improvements. Because trust is very hard to rebuild, we should weigh this before releasing software that might not live up to expectations.
So there is a tension between "you only get one chance to make a first impression" and "gilding the lily" (perfectionism). In the past I've seen this stated as _pressure to ship vs pressure to improve_.
@@ -136,10 +136,10 @@ So there is a tension between "you only get one chance to make a first impressio
A Risk-First re-framing of this (as shown in the diagram above) might be the balance between:
-- The perceived [Scarcity Risks](Scarcity-Risk.md) (such as funding, time available, etc) of staying in development (pressure to ship).
-- The perceived [Trust & Belief Risk](Communication-Risk.md#trust--belief-risk), [Feature Risk](Feature-Risk.md) and [Operational Risk](Operational-Risk.md) of going to production (pressure to improve).
+- The perceived [Scarcity Risks](/tags/Scarcity-Risk) (such as funding, time available, etc) of staying in development (pressure to ship).
+- The perceived [Trust & Belief Risk](/tags/Trust-And-Belief-Risk), [Feature Risk](/tags/Feature-Risk) and [Operational Risk](/tags/Operational-Risk) of going to production (pressure to improve).
-The "should we ship?" decision is therefore a complex one. In [Meeting Reality](../thinking/Meeting-Reality.md), we discussed that it's better to do this "sooner, more frequently, in smaller chunks and with feedback". We can meet [Operational Risk](Operational-Risk.md) _on our own terms_ by doing so:
+The "should we ship?" decision is therefore a complex one. In [Meeting Reality](/thinking/Meeting-Reality.md), we discussed that it's better to do this "sooner, more frequently, in smaller chunks and with feedback". We can meet [Operational Risk](/tags/Operational-Risk) _on our own terms_ by doing so:
|Meet Reality... |Techniques |
|----------------------------|----------------------------------------------------------------------|
@@ -152,7 +152,7 @@ The "should we ship?" decision is therefore a complex one. In [Meeting Reality]
## The End Of The Road
-In a way, [actions](../thinking/Glossary.md#taking-action) like **Design** and **Improvement** bring us right back to where we started from: identifying [Dependency Risks](Dependency-Risk.md), [Feature Risks](Feature-Risk.md) and [Complexity Risks](Complexity-Risk.md) that hinder our operation, and mitigating them through actions like _software development_.
+In a way, [actions](/thinking/Glossary.md#taking-action) like **Design** and **Improvement** bring us right back to where we started from: identifying [Dependency Risks](/tags/Dependency-Risk), [Feature Risks](/tags/Feature-Risk) and [Complexity Risks](/tags/Complexity-Risk) that hinder our operation, and mitigating them through actions like _software development_.
Our safari of risk is finally complete: it's time to reflect on what we've seen in the next section, [Staging and Classifying](Staging-And-Classifying.md).
\ No newline at end of file
diff --git a/docs/risks/Risk-Landscape.md b/docs/risks/Risk-Landscape.md
index cd60ed21e..4e4ac4760 100644
--- a/docs/risks/Risk-Landscape.md
+++ b/docs/risks/Risk-Landscape.md
@@ -46,19 +46,19 @@ Below is a table outlining the different risks we'll see. There _is_ an order t
|Risk | Description |
|----------------|--------------------------|
-|[Feature Risk](Feature-Risk.md) |When you haven't built features the market needs, or the features you have built contain bugs, or the market changes underneath you. |
-|[Communication Risk](Communication-Risk.md) |Risks associated with getting messages heard and understood.|
-|[Complexity Risk](Complexity-Risk.md) |Your software is so complex it makes it hard to change, understand, or run. |
-|[Dependency Risk](Dependency-Risk.md) |Risks of depending on other people, products, software, functions, etc. This is a general look at dependencies, before diving into specifics like...|
-|[Scarcity Risk](Scarcity-Risk.md) |Risks associated with having limited time, money or some other resource.|
-|[Deadline Risk](Deadline-Risk.md) |The risk of having a date to hit.|
-|[Software Dependency Risk](Software-Dependency-Risk.md)|The risk of depending on a software library, service or function.|
-|[Process Risk](Process-Risk.md) |When you depend on a business process, or human process to give you something you need.|
-|[Boundary Risk](Boundary-Risk.md) |Risks due to making decisions that limit your choices later on. Sometimes, you go the wrong way on the [Risk Landscape](Risk-Landscape.md) and it's hard to get back to where you want to be.|
-|[Agency Risk](Agency-Risk.md) |Risks that staff have their own [Goals](../thinking/Glossary.md#goal), which might not align with those of the project or team.|
-|[Coordination Risk](Coordination-Risk.md) |Risks due to the fact that systems contain multiple agents, which need to work together.|
-|[Map And Territory Risk](Map-And-Territory-Risk.md) |Risks due to the fact that people don't see the world as it really is. (After all, they're working off different, imperfect [Internal Models](../thinking/Glossary.md#internal-model).)|
-|[Operational Risk](Operational-Risk.md) |Software is embedded in a system containing people, buildings, machines and other services. Operational risk considers this wider picture of risk associated with running a software service or business in the real world.|
+|[Feature Risk](/tags/Feature-Risk) |When you haven't built features the market needs, or the features you have built contain bugs, or the market changes underneath you. |
+|[Communication Risk](/tags/Communication-Risk) |Risks associated with getting messages heard and understood.|
+|[Complexity Risk](/tags/Complexity-Risk) |Your software is so complex it makes it hard to change, understand, or run. |
+|[Dependency Risk](/tags/Dependency-Risk) |Risks of depending on other people, products, software, functions, etc. This is a general look at dependencies, before diving into specifics like...|
+|[Scarcity Risk](/tags/Scarcity-Risk) |Risks associated with having limited time, money or some other resource.|
+|[Deadline Risk](/tags/Deadline-Risk) |The risk of having a date to hit.|
+|[Software Dependency Risk](/tags/Software-Dependency-Risk)|The risk of depending on a software library, service or function.|
+|[Process Risk](/tags/Process-Risk) |When you depend on a business process, or human process to give you something you need.|
+|[Boundary Risk](/tags/Boundary-Risk) |Risks due to making decisions that limit your choices later on. Sometimes, you go the wrong way on the [Risk Landscape](Risk-Landscape.md) and it's hard to get back to where you want to be.|
+|[Agency Risk](/tags/Agency-Risk) |Risks that staff have their own [Goals](/thinking/Glossary.md#goal), which might not align with those of the project or team.|
+|[Coordination Risk](/tags/Coordination-Risk) |Risks due to the fact that systems contain multiple agents, which need to work together.|
+|[Map And Territory Risk](/tags/Map-And-Territory-Risk) |Risks due to the fact that people don't see the world as it really is. (After all, they're working off different, imperfect [Internal Models](/thinking/Glossary.md#internal-model).)|
+|[Operational Risk](/tags/Operational-Risk) |Software is embedded in a system containing people, buildings, machines and other services. Operational risk considers this wider picture of risk associated with running a software service or business in the real world.|
After the last stop on the tour, in [Staging and Classifying](Staging-And-Classifying.md) we'll recap what we've seen and make some guesses about how things fit together.
@@ -90,11 +90,11 @@ In the financial crisis of 2007, these models of risk didn't turn out to be much
- This caused credit defaults (the thing that [Credit Risk](https://en.wikipedia.org/wiki/Credit_risk) measures were meant to guard against) even though the banks _technically_ were solvent.
- Once credit defaults started, this worried investors in the banks, which had massive [Market Risk](https://en.wikipedia.org/wiki/Market_risk) impacts that none of the models foresaw.
-All the [Risks](../thinking/Glossary.md#risk) were [correlated](https://www.investopedia.com/terms/c/correlation.asp). That is, they were affected by the _same underlying events_, or _each other_.
+All the [Risks](/thinking/Glossary.md#risk) were [correlated](https://www.investopedia.com/terms/c/correlation.asp). That is, they were affected by the _same underlying events_, or _each other_.
![Causation shown on a Risk-First Diagram. More complexity is likely to lead to more Operational Risk](/img/generated/risks/landscape/causation.png)
-It's like this with software risks, too, sadly. For example, [Operational Risk](Operational-Risk.md) is going to be heavily correlated with [Complexity Risk](Complexity-Risk.md). Just like a machine, the more complex it is, the more likely it is to fail, and the more likely it will fail in some unexpected, difficult-to-diagnose way.
+It's like this with software risks, too, sadly. For example, [Operational Risk](/tags/Operational-Risk) is going to be heavily correlated with [Complexity Risk](/tags/Complexity-Risk). Just like a machine, the more complex it is, the more likely it is to fail, and the more likely it will fail in some unexpected, difficult-to-diagnose way.
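+
+A toy calculation shows why this correlation is so strong. If a system needs _all_ of its components working, and each fails independently (a big simplifying assumption) with some small probability, the chance of an incident grows quickly with component count:
+
+```python
+# P(at least one failure) = 1 - (1 - p) ** n, for n independent components.
+# The numbers are illustrative, not taken from the text.
+p = 0.001  # daily failure probability of a single component
+
+for n in (10, 100, 1000):
+    p_incident = 1 - (1 - p) ** n
+    print(f"{n:5d} components -> {p_incident:5.1%} chance of a daily incident")
+```
+
+Real systems are worse than this independent-failure model suggests, because failures are themselves correlated: one underlying event can take out many components at once.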
In the Risk-First diagrams, we will sometimes show correlation or causation with an arrow, like in the diagram above.
@@ -104,4 +104,4 @@ Just as naturalists are able to head out and find new species of insects and pla
It's a big, crazy, evolving world of software. Help to fill in the details. Report back what you find.
-So, let's get started with [Feature Risk](Feature-Risk.md).
+So, let's get started with [Feature Risk](/tags/Feature-Risk).
diff --git a/docs/risks/Staging-And-Classifying.md b/docs/risks/Staging-And-Classifying.md
index 16db7491c..0cf67dab2 100644
--- a/docs/risks/Staging-And-Classifying.md
+++ b/docs/risks/Staging-And-Classifying.md
@@ -20,7 +20,7 @@ But if we are good collectors, then before we finish we should _[Stage](https://
## Towards A "Periodic Table" Of Risks
-As we said [at the start](A-Pattern-Language.md), Risk-First is all about developing _A Pattern Language_. We can use the terms like "[Feature Risk](Feature-Risk.md)" or "[Learning Curve Risk](Communication-Risk.md#learning-curve-risk)" to explain phenomena we see on software projects. If we want to [De-Risk](../thinking/De-Risking.md) our work, we need this power of explanation so we can talk about how to go about it.
+As we said [at the start](A-Pattern-Language.md), Risk-First is all about developing _A Pattern Language_. We can use terms like "[Feature Risk](/tags/Feature-Risk)" or "[Learning Curve Risk](/tags/Learning-Curve-Risk)" to explain phenomena we see on software projects. If we want to [De-Risk](/thinking/De-Risking.md) our work, we need this power of explanation so we can talk about how to go about it.
![Periodic Table of Risks, Horizontal](/img/generated/staging-and-classifying/periodic-horizontal.png)
@@ -30,7 +30,7 @@ If you've been reading closely, you'll notice that a number of themes come up ag
## The Power Of Abstractions
-[Abstraction](../thinking/Glossary.md#abstraction) appears as a concept continually: in [Communication Risk](Communication-Risk.md), [Complexity Metrics](Complexity-Risk.md#kolmogorov-complexity), [Map and Territory Risk](Map-And-Territory-Risk.md) or how it causes [Boundary Risk](Boundary-Risk.md). We've looked at some complicated examples of abstractions, such as [network protocols](Communication-Risk.md#protocols), [dependencies on technology](Software-Dependency-Risk.md) or [Business Processes](Process-Risk.md#the-purpose-of-process).
+[Abstraction](/thinking/Glossary.md#abstraction) appears as a concept continually: in [Communication Risk](/tags/Communication-Risk), [Complexity Metrics](Complexity-Risk.md#kolmogorov-complexity), [Map and Territory Risk](/tags/Map-And-Territory-Risk) or how it causes [Boundary Risk](/tags/Boundary-Risk). We've looked at some complicated examples of abstractions, such as [network protocols](Communication-Risk.md#protocols), [dependencies on technology](/tags/Software-Dependency-Risk) or [Business Processes](Process-Risk.md#the-purpose-of-process).
Let's now _generalize_ what is happening with abstraction. To do this, we'll consider the simplest example of abstraction: _naming a pattern_ of behaviour we see in the real world, such as "Binge Watching" or "Remote Working", or naming a category of insects as "Beetles".
@@ -40,10 +40,10 @@ Let's now _generalize_ what is happening with abstraction. To do this, we'll co
As shown in the above diagram, _using an abstraction you already know_ means:
- - **Mitigating [Feature Risk](Feature-Risk.md)**: because the abstraction is providing you with something _useful_. For example, using the word "London" allows you to refer to a whole (but slightly non-specific) geographic area.
- - **Accepting [Communication Risk](Communication-Risk.md)**: because if you are using the abstraction in conversation the people you are using it with _need to understand it too_.
- - **Accepting [Map and Territory Risk](Map-And-Territory-Risk.md)**: because the abstraction is a simplification and not the actual thing itself.
- - **Living with [Dependency Risks](Dependency-Risk.md)**: we depend on a word in our language (or a function in our library, or a service on the Internet). But words are [unreliable](Dependency-Risk.md#reliability-risk). Language _changes_ and _evolves_, and the words you are using now might not always mean what you want them to mean. (Software too changes and evolves: We've seen this in [Red Queen Risk](Scarcity-Risk.md#red-queen-risk) and [Feature Drift Risk](Feature-Risk.md#feature-drift-risk).)
+ - **Mitigating [Feature Risk](/tags/Feature-Risk)**: because the abstraction is providing you with something _useful_. For example, using the word "London" allows you to refer to a whole (but slightly non-specific) geographic area.
+ - **Accepting [Communication Risk](/tags/Communication-Risk)**: because if you are using the abstraction in conversation the people you are using it with _need to understand it too_.
+ - **Accepting [Map and Territory Risk](/tags/Map-And-Territory-Risk)**: because the abstraction is a simplification and not the actual thing itself.
+ - **Living with [Dependency Risks](/tags/Dependency-Risk)**: we depend on a word in our language (or a function in our library, or a service on the Internet). But words are [unreliable](/tags/Reliability-Risk). Language _changes_ and _evolves_, and the words you are using now might not always mean what you want them to mean. (Software, too, changes and evolves: we've seen this in [Red Queen Risk](/tags/Red-Queen-Risk) and [Feature Drift Risk](/tags/Feature-Drift-Risk).)
### Inventing A New Abstraction
@@ -51,10 +51,10 @@ As shown in the above diagram, _using an abstraction you already know_ means:
As shown in the above diagram, _inventing a new abstraction_ means:
-- **Mitigating [Feature Risk](Feature-Risk.md).** By _giving a name to something_ (or building a new product, or a way of working) you are offering up something that someone else can use. This should mitigate [Feature Risk](Feature-Risk.md) in the sense that other people can choose to use your it, if it fits their requirements.
-- **Creating a [Protocol](Communication-Risk.md#protocols).** Introducing _new words to a language_ creates [Protocol Risk](Communication-Risk.md#protocol-risk) as most people won't know what it means.
-- **Increasing [Complexity Risk](Complexity-Risk.md).** Because, the more words we have, the more complex the language is.
-- **Creating the opportunity for [Boundary Risk](Boundary-Risk.md).** By naming something, you _implicitly_ create a boundary, because the world is now divided into "things which _are_ X" and "things which _are not_ X". _Boundary Risk arises from abstractions._
+- **Mitigating [Feature Risk](/tags/Feature-Risk).** By _giving a name to something_ (or building a new product, or a way of working) you are offering up something that someone else can use. This should mitigate [Feature Risk](/tags/Feature-Risk) in the sense that other people can choose to use it, if it fits their requirements.
+- **Creating a [Protocol](Communication-Risk.md#protocols).** Introducing _new words to a language_ creates [Protocol Risk](/tags/Protocol-Risk), as most people won't know what they mean.
+- **Increasing [Complexity Risk](/tags/Complexity-Risk).** Because the more words we have, the more complex the language is.
+- **Creating the opportunity for [Boundary Risk](/tags/Boundary-Risk).** By naming something, you _implicitly_ create a boundary, because the world is now divided into "things which _are_ X" and "things which _are not_ X". _Boundary Risk arises from abstractions._
### Learning A New Abstraction
@@ -62,27 +62,27 @@ As shown in the above diagram, _inventing a new abstraction_ means:
As shown in the above diagram, _learning a new abstraction_ means:
- - **Overcoming a [Learning Curve](Communication-Risk.md#learning-curve-risk)**: because you have to _learn_ a name in order to use it (whether it is the name of a function, a dog, or someone at a party).
- - **Accepting [Boundary Risks](Boundary-Risk.md).** Commitment to one abstraction over another means that you have the opportunity cost of the other abstractions that you could have used.
- - **Accepting [Map And Territory Risk](Map-And-Territory-Risk.md).** Because the word refers to the _concept_ of the thing, and _not the thing itself_.
+ - **Overcoming a [Learning Curve](/tags/Learning-Curve-Risk)**: because you have to _learn_ a name in order to use it (whether it is the name of a function, a dog, or someone at a party).
+ - **Accepting [Boundary Risks](/tags/Boundary-Risk).** Commitment to one abstraction over another means that you have the opportunity cost of the other abstractions that you could have used.
+ - **Accepting [Map And Territory Risk](/tags/Map-And-Territory-Risk).** Because the word refers to the _concept_ of the thing, and _not the thing itself_.
-Abstraction is everywhere and seems to be at the heart of what our brains do. But clearly, like [taking any other action](../thinking/Glossary.md#taking-action) there is always trade-off in terms of risk.
+Abstraction is everywhere and seems to be at the heart of what our brains do. But clearly, like [taking any other action](/thinking/Glossary.md#taking-action), there is always a trade-off in terms of risk.
## Your Feature Risk is Someone Else's Dependency Risk
![Features And Dependencies](/img/generated/staging-and-classifying/features-and-dependencies.png)
-In the [Feature Risk](Feature-Risk.md) section, we looked at the problems of _supplying something for a client to use as a dependency_: you've got to satisfy a demand ([Market Risk](Feature-Risk.md#market-risk)) and service a segment of the user community ([Feature Access Risk](Feature-Risk.md#feature-access-risk)).
+In the [Feature Risk](/tags/Feature-Risk) section, we looked at the problems of _supplying something for a client to use as a dependency_: you've got to satisfy a demand ([Market Risk](/tags/Market-Risk)) and service a segment of the user community ([Feature Access Risk](/tags/Feature-Access-Risk)).
-However over the rest of the [Dependency Risk](Dependency-Risk.md) sections we looked at this from the point of view of _being a client of someone else_: you want to find trustworthy, reliable dependencies that don't give up when you least want them to.
+However, over the rest of the [Dependency Risk](/tags/Dependency-Risk) sections, we looked at this from the point of view of _being a client of someone else_: you want to find trustworthy, reliable dependencies that don't give up when you least want them to.
-So [Feature Risk](Feature-Risk.md) and [Dependency Risk](Dependency-Risk.md) are _two sides of the same coin_, they capture the risks in _demand_ and _supply_.
+So [Feature Risk](/tags/Feature-Risk) and [Dependency Risk](/tags/Dependency-Risk) are _two sides of the same coin_: they capture the risks in _demand_ and _supply_.
-As shown in the diagram above, relationships of features/dependencies are the basis of [Supply Chains](https://en.wikipedia.org/wiki/Supply_chain) and the world-wide network of goods and services that forms the modern economy. The incredible complexity of this network mean incredible [Complexity Risk](Complexity-Risk.md), too. Humans are masters at [coordinating](Coordination-Risk.md) and managing our dependencies.
+As shown in the diagram above, relationships of features/dependencies are the basis of [Supply Chains](https://en.wikipedia.org/wiki/Supply_chain) and the world-wide network of goods and services that forms the modern economy. The incredible complexity of this network means incredible [Complexity Risk](/tags/Complexity-Risk), too. Humans are masters at [coordinating](/tags/Coordination-Risk) and managing their dependencies.
## The Work Continues
-On this journey around the [Risk Landscape](Risk-Landscape.md) we've collected a (hopefully) good, representative sample of [Risks](../thinking/Glossary.md#risk) and where to find them. But there are more out there. How many of these have you seen on your projects? What is missing? What is wrong?
+On this journey around the [Risk Landscape](Risk-Landscape.md) we've collected a (hopefully) good, representative sample of [Risks](/thinking/Glossary.md#risk) and where to find them. But there are more out there. How many of these have you seen on your projects? What is missing? What is wrong?
Please help by reporting back what you find.
diff --git a/docs/risks/Start.md b/docs/risks/Start.md
index 36d8f0606..5df5b929e 100644
--- a/docs/risks/Start.md
+++ b/docs/risks/Start.md
@@ -19,7 +19,7 @@ Much of the content of [Risk-First](https://riskfirst.org) is a collection of [R
Here, we're going to take you through the various types of Risk you will face on every software project.
-In [Thinking Risk-First](../thinking/One-Size-Fits-No-One.md), we saw how _Lean Software Development_ owed its existence to production-line manufacturing techniques developed at Toyota. And we saw that the _Waterfall_ approach originally came from engineering. If Risk-First is anything, it's about applying the techniques of _Risk Management_ to the discipline of _Software Development_ (there's nothing new under the sun, after all).
+In [Thinking Risk-First](/thinking/One-Size-Fits-No-One.md), we saw how _Lean Software Development_ owed its existence to production-line manufacturing techniques developed at Toyota. And we saw that the _Waterfall_ approach originally came from engineering. If Risk-First is anything, it's about applying the techniques of _Risk Management_ to the discipline of _Software Development_ (there's nothing new under the sun, after all).
One key activity of Risk Management we haven't discussed yet is _categorizing_ risks. So, this track of Risk-First is all about developing categories of risks for use in Software Development.
diff --git a/docs/thinking/A-Conversation.md b/docs/thinking/A-Conversation.md
index 87454420a..9c564c89b 100644
--- a/docs/thinking/A-Conversation.md
+++ b/docs/thinking/A-Conversation.md
@@ -27,7 +27,7 @@ Uniquely as a species, humans are fascinated by story-telling _precisely because
As humans, we all bring our own experiences to bear on the best way to solve problems. Sometimes, experience tells us that solving a problem one way will create a new _worse_ problem.
-It's key that we share our experiences to improve everyone's [Internal Model](../thinking/Glossary.md#internal-model)s.
+It's key that we share our experiences to improve everyone's [Internal Model](/thinking/Glossary.md#internal-model)s.
## A Risk Conversation
@@ -43,9 +43,9 @@ Synergy's release process means that the app-store submission must happen in a f
**Eve**: Well, you know Synergy did their review and asked us to upgrade our Web Server to only allow TLS version 1.1 and greater?
-**Bob**: Yes, I remember: we discussed it as a team and thought the simplest thing would be to change the security settings on the Web Server, but we all felt it was pretty risky. We decided that in order to flush out [Hidden Risk](../thinking/Glossary.md#hidden-risk), we'd upgrade our entire production site to use it _now_, rather than wait for the app launch. **(1)**
+**Bob**: Yes, I remember: we discussed it as a team and thought the simplest thing would be to change the security settings on the Web Server, but we all felt it was pretty risky. We decided that in order to flush out [Hidden Risk](/thinking/Glossary.md#hidden-risk), we'd upgrade our entire production site to use it _now_, rather than wait for the app launch. **(1)**
-**Eve**: Right, and it _did_ flush out [Hidden Risk](../thinking/Glossary.md#hidden-risk): some of our existing software broke on Windows 7, which sadly we still need to support. So, we had to back it out.
+**Eve**: Right, and it _did_ flush out [Hidden Risk](/thinking/Glossary.md#hidden-risk): some of our existing software broke on Windows 7, which sadly we still need to support. So, we had to back it out.
**Bob**: Ok, well I guess it's good we found out _now_. It would have been a disaster to discover this after the app had gone live on Synergy's app-store. **(2)**
@@ -55,9 +55,9 @@ Synergy's release process means that the app-store submission must happen in a f
**Eve**: How about we run two web-servers? One for the existing content, and one for our new Synergy app? We'd have to get a new external IP address, handle DNS setup, change the firewalls, and then deploy a new version of the Web Server software on the production boxes... **(3)**
-**Bob**: This feels like there'd be a lot of [Attendant Risk](../thinking/Glossary.md#attendant-risk): we're adding [Complexity Risk](../risks/Complexity-Risk.md) to our estate, and all of this needs to be handled by the Networking Team, so we're picking up a lot of [Process Risk](../risks/Process-Risk.md). I'm also worried that there are too many steps here, and we're going to discover loads of [Hidden Risks](../thinking/Glossary.md#hidden-risk) as we go. **(4)**
+**Bob**: This feels like there'd be a lot of [Attendant Risk](/thinking/Glossary.md#attendant-risk): we're adding [Complexity Risk](/tags/Complexity-Risk) to our estate, and all of this needs to be handled by the Networking Team, so we're picking up a lot of [Process Risk](/tags/Process-Risk). I'm also worried that there are too many steps here, and we're going to discover loads of [Hidden Risks](/thinking/Glossary.md#hidden-risk) as we go. **(4)**
-**Eve**: Well, you're correct on the first one. But, I've done this before not that long ago for a Chinese project, so I know the process - we shouldn't run into any new [Hidden Risk](../thinking/Glossary.md#hidden-risk). **(4)**
+**Eve**: Well, you're correct on the first one. But I've done this before, not that long ago, for a Chinese project, so I know the process - we shouldn't run into any new [Hidden Risk](/thinking/Glossary.md#hidden-risk). **(4)**
**Bob**: OK, fair enough. But isn't there something simpler we can do? Maybe some settings in the Web Server? **(4)**
@@ -65,7 +65,7 @@ Synergy's release process means that the app-store submission must happen in a f
**Bob**: OK, and upgrading to Apache is a _big_ risk, right? We'd have to migrate all of our configuration... **(4)**
-**Eve**: Yes, let's not go there. So, _changing_ the settings on Baroque, we have the risk that it's not supported by the software and we're back where we started. Also, if we isolate the Synergy app stuff now, we can mess around with it at any point in future, which is a big win in case there are other [Hidden Risks](../thinking/Glossary.md#hidden-risk) with the security changes that we don't know about yet. **(5)**
+**Eve**: Yes, let's not go there. So, _changing_ the settings on Baroque, we have the risk that it's not supported by the software and we're back where we started. Also, if we isolate the Synergy app stuff now, we can mess around with it at any point in future, which is a big win in case there are other [Hidden Risks](/thinking/Glossary.md#hidden-risk) with the security changes that we don't know about yet. **(5)**
**Bob**: OK, I can see that buys us something, but time is really short and we have holidays coming up.
diff --git a/docs/thinking/A-Simple-Scenario.md b/docs/thinking/A-Simple-Scenario.md
index af2b7a07e..ed4d1bbd0 100644
--- a/docs/thinking/A-Simple-Scenario.md
+++ b/docs/thinking/A-Simple-Scenario.md
@@ -29,45 +29,45 @@ For a moment, forget about software completely and think about _any endeavour at
## Goal In Mind
-Now, in this endeavour, we want to be successful. That is to say, we have a **[Goal](../thinking/Glossary.md#goal)** in mind: we want our friends to go home satisfied after a decent meal and not to feel hungry. As a bonus, we might also want to spend time talking with them before and during the meal. So, now to achieve our [Goal](../thinking/Glossary.md#goal) we *probably* have to do some tasks.
+Now, in this endeavour, we want to be successful. That is to say, we have a **[Goal](/thinking/Glossary.md#goal)** in mind: we want our friends to go home satisfied after a decent meal and not to feel hungry. As a bonus, we might also want to spend time talking with them before and during the meal. So, to achieve our [Goal](/thinking/Glossary.md#goal), we *probably* have to do some tasks.
-Since our goal only exists _in our head_, we can say it is part of our **[Internal Model](../thinking/Glossary.md#internal-model)** of the world. That is, the model we have of reality. This model extends to _predicting what will happen_.
+Since our goal only exists _in our head_, we can say it is part of our **[Internal Model](/thinking/Glossary.md#internal-model)** of the world. That is, the model we have of reality. This model extends to _predicting what will happen_.
If we do nothing, our friends will turn up and maybe there's nothing in the house for them to eat. Or maybe, the thing that you're going to cook is going to take hours and they'll have to sit around and wait for you to cook it and they'll leave before it's ready. Maybe you'll be some ingredients short, or maybe you're not confident of the steps to prepare the meal and you're worried about messing it all up.
## Attendant Risks
-These _nagging doubts_ that are going through your head are what I'll call the [Attendant Risks](../thinking/Glossary.md#attendant-risk): they're the ones that will occur to you as you start to think about what will happen.
+These _nagging doubts_ that are going through your head are what I'll call the [Attendant Risks](/thinking/Glossary.md#attendant-risk): they're the ones that will occur to you as you start to think about what will happen.
![Goal, with the risks you know about](/img/generated/introduction/goal_in_mind.png)
When we go about preparing for this wonderful evening, we can choose to deal with these risks: shop for the ingredients in advance, prepare parts of the meal and maybe practice the cooking in advance. Or, we can wing it and sometimes we'll get lucky.
-How much effort we expend on these [Attendant Risks](../thinking/Glossary.md#attendant-risk) depends on how big we think they are. For example, if you know there's a 24-hour shop, you'll probably not worry too much about getting the ingredients well in advance (although, the shop _could still be closed_).
+How much effort we expend on these [Attendant Risks](/thinking/Glossary.md#attendant-risk) depends on how big we think they are. For example, if you know there's a 24-hour shop, you'll probably not worry too much about getting the ingredients well in advance (although the shop _could still be closed_).
## Hidden Risks
-[Attendant Risks](../thinking/Glossary.md#attendant-risk) are risks you are aware of. You may not be able to exactly _quantify_ them, but you know they exist. But there are also **[Hidden Risks](../thinking/Glossary.md#attendant-risk)** that you _don't_ know about: if you're poaching eggs for dinner, perhaps you didn't know that fresh eggs poach best. Donald Rumsfeld famously called these kinds of risks "Unknown Unknowns":
+[Attendant Risks](/thinking/Glossary.md#attendant-risk) are risks you are aware of. You may not be able to exactly _quantify_ them, but you know they exist. But there are also **[Hidden Risks](/thinking/Glossary.md#hidden-risk)** that you _don't_ know about: if you're poaching eggs for dinner, perhaps you didn't know that fresh eggs poach best. Donald Rumsfeld famously called these kinds of risks "Unknown Unknowns":
> "Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones." - [Donald Rumsfeld, _Wikipedia_](https://en.wikipedia.org/wiki/There_are_known_knowns)
![Goal, the risks you know about and the ones you don't](/img/generated/introduction/hidden_risks.png)
-Different people evaluate risks differently and they'll also _know_ about different risks. What is an [Attendant Risk](../thinking/Glossary.md#attendant-risk) for one person is a [Hidden Risk](../thinking/Glossary.md#attendant-risk) for another.
+Different people evaluate risks differently and they'll also _know_ about different risks. What is an [Attendant Risk](/thinking/Glossary.md#attendant-risk) for one person is a [Hidden Risk](/thinking/Glossary.md#hidden-risk) for another.
Which risks we know about depends on our **knowledge** and **experience**, then. And that varies from person to person (or team to team).
## Taking Action and Meeting Reality
-As the dinner party gets closer, we make our preparations and the inadequacies of the [Internal Model](../thinking/Glossary.md#internal-model) become apparent. We learn what we didn't know and the [Hidden Risks](../thinking/Glossary.md#hidden-risk) reveal themselves. Other things we were worried about don't materialise. Things we thought would be minor risks turn out to be greater.
+As the dinner party gets closer, we make our preparations and the inadequacies of the [Internal Model](/thinking/Glossary.md#internal-model) become apparent. We learn what we didn't know and the [Hidden Risks](/thinking/Glossary.md#hidden-risk) reveal themselves. Other things we were worried about don't materialise. Things we thought would be minor risks turn out to be greater.
![How Taking Action affects Reality, and also changes your Internal Model](/img/generated/introduction/model_vs_reality.png)
-Our model is forced to [Meet Reality](../thinking/Glossary.md#meet-reality), and the model changes, forcing us to deal with these risks, as shown in the diagram above.
+Our model is forced to [Meet Reality](/thinking/Glossary.md#meet-reality), and the model changes, forcing us to deal with these risks, as shown in the diagram above.
-In Risk-First, whenever we try to _do something_ about a risk, it is called [Taking Action](../thinking/Glossary.md#taking-action). [Taking Action](../thinking/Glossary.md#taking-action) _changes_ reality, and with it your [Internal Model](../thinking/Glossary.md#internal-model) of the risks you're facing. That's because it's only by interacting with the world that we add knowledge to our [Internal Model](../thinking/Glossary.md#internal-model) about what works and what doesn't. Even something as passive as _checking the shop opening times_ is an action, and it improves on our [Internal Model](../thinking/Glossary.md#internal-model) of the world.
+In Risk-First, whenever we try to _do something_ about a risk, it is called [Taking Action](/thinking/Glossary.md#taking-action). [Taking Action](/thinking/Glossary.md#taking-action) _changes_ reality, and with it your [Internal Model](/thinking/Glossary.md#internal-model) of the risks you're facing. That's because it's only by interacting with the world that we add knowledge to our [Internal Model](/thinking/Glossary.md#internal-model) about what works and what doesn't. Even something as passive as _checking the shop opening times_ is an action, and it improves our [Internal Model](/thinking/Glossary.md#internal-model) of the world.
-If we had a good [Internal Model](../thinking/Glossary.md#internal-model) and [took the right actions](../thinking/Glossary.md#taking-action), we should see positive outcomes. If we failed to manage the risks, or took inappropriate actions, we'll probably see negative outcomes.
+If we had a good [Internal Model](/thinking/Glossary.md#internal-model) and [took the right actions](/thinking/Glossary.md#taking-action), we should see positive outcomes. If we failed to manage the risks, or took inappropriate actions, we'll probably see negative outcomes.
## Why The New Terms?
@@ -79,4 +79,4 @@ I know that as a reader it's annoying to have to pick up new terminology. So yo
## On To Software?
-Here, we've introduced some new terms that we're going to use a lot: [Meet Reality](../thinking/Glossary.md#meet-reality), [Attendant Risk](../thinking/Glossary.md#attendant-risk), [Hidden Risk](../thinking/Glossary.md#attendant-risk), [Internal Model](../thinking/Glossary.md#internal-model), [Taking Action](../thinking/Glossary.md#taking-action) and, we've applied them in a simple scenario. Clearly, what we really want to get to is talking about software development, but first I want to dig a bit deeper into how we represent these ideas graphically, using [Risk-First Diagrams](Risk-First-Diagrams.md).
+Here, we've introduced some new terms that we're going to use a lot: [Meet Reality](/thinking/Glossary.md#meet-reality), [Attendant Risk](/thinking/Glossary.md#attendant-risk), [Hidden Risk](/thinking/Glossary.md#hidden-risk), [Internal Model](/thinking/Glossary.md#internal-model) and [Taking Action](/thinking/Glossary.md#taking-action), and we've applied them in a simple scenario. Clearly, what we really want to get to is talking about software development, but first I want to dig a bit deeper into how we represent these ideas graphically, using [Risk-First Diagrams](Risk-First-Diagrams.md).
diff --git a/docs/thinking/Cadence.md b/docs/thinking/Cadence.md
index 0550f2412..c9689d898 100644
--- a/docs/thinking/Cadence.md
+++ b/docs/thinking/Cadence.md
@@ -46,7 +46,7 @@ In a software development scenario, you should also test your model against real
This list is arranged so that at the top, we have the most visceral, most _real_ feedback loop, but at the same time, the slowest.
-At the bottom, a good IDE can inform you about errors in your [Internal Model](../thinking/Glossary.md#internal-model) in real time, by way of highlighting compilation errors . So, this is the fastest loop, but it's the most _limited_ reality.
+At the bottom, a good IDE can inform you about errors in your [Internal Model](/thinking/Glossary.md#internal-model) in real time, by way of highlighting compilation errors. So, this is the fastest loop, but it's the most _limited_ reality.
Imagine for a second that you had a special time-travelling machine. With it, you could make a change to your software, and get back a report from the future listing out all the issues people had faced using it over its lifetime, instantly.
diff --git a/docs/thinking/Consider-Payoff.md b/docs/thinking/Consider-Payoff.md
index 248711f32..8ebf5281c 100644
--- a/docs/thinking/Consider-Payoff.md
+++ b/docs/thinking/Consider-Payoff.md
@@ -84,7 +84,7 @@ Sometimes, there will be multiple _actions_ you could take on a project and you
- And, making a decision takes time, which could add risk to your schedule.
- And what's the risk if the decision doesn't get made?
-The fruits of this gambling are revealed when we [meet reality](../thinking/Glossary.md#meet-reality) and we can see whether our bets were worthwhile.
+The fruits of this gambling are revealed when we [meet reality](/thinking/Glossary.md#meet-reality) and we can see whether our bets were worthwhile.
Very occasionally, you'll be in a place where your hand is forced and you have to take one of only a handful of actions, or there is a binary decision: a so-called "rock and a hard place". But as we'll see in the third example below, even here you can usually change the action (and therefore the payoff) in your favour.
@@ -94,17 +94,17 @@ YAGNI is an acronym for "You Aren't Gonna Need It":
> YAGNI originally is an acronym that stands for "You Aren't Gonna Need It". It is a mantra from Extreme Programming that's often used generally in agile software teams. It's a statement that some capability we presume our software needs in the future should not be built now because "you aren't gonna need it". - [YAGNI, _Martin Fowler_](https://www.martinfowler.com/bliki/Yagni.html)
-The idea makes sense: if you take on extra work that you don't need, _of course_ you'll be accreting risk - you're taking time away from sorting out the real problems! You'll also have a greater body of code to manage, which is [also a risk](../risks/Complexity-Risk.md).
+The idea makes sense: if you take on extra work that you don't need, _of course_ you'll be accreting risk - you're taking time away from sorting out the real problems! You'll also have a greater body of code to manage, which is [also a risk](/tags/Complexity-Risk).
But, there is always the opposite opinion: [You _Are_ Gonna Need It](http://wiki.c2.com/?YouAreGonnaNeedIt). As a simple example, we often add log statements in our code as we write it (so we can trace what happened when things go wrong), though following YAGNI strictly says we shouldn't.
-So which is right? We should conclude that we do the work _if there is a worthwhile [Payoff](../thinking/Glossary.md#payoff)_.
+So which is right? We should conclude that we do the work _if there is a worthwhile [Payoff](/thinking/Glossary.md#payoff)_.
- - Logging statements are _good_, because otherwise, you're increasing the risk that in production, no one will be able to understand [how the software went wrong](../risks/Dependency-Risk#invisibility-risk).
- - However, adding them takes time, which might [risk us not hitting our schedule](../risks/Scarcity-Risk.md#schedule-risk).
- - Also, we have to manage larger log files on our production systems. _Too much logging_ is just noise, and makes it harder to figure out what went wrong. This increases the risk that our software is [less transparent in how it works](../risks/Complexity-Risk.md).
+ - Logging statements are _good_, because otherwise, you're increasing the risk that in production, no one will be able to understand [how the software went wrong](/risks/Dependency-Risk.md#invisibility-risk).
+ - However, adding them takes time, which might [risk us not hitting our schedule](/tags/Schedule-Risk).
+ - Also, we have to manage larger log files on our production systems. _Too much logging_ is just noise, and makes it harder to figure out what went wrong. This increases the risk that our software is [less transparent in how it works](/tags/Complexity-Risk).
-So, it's a trade-off: continue adding logging statements so long as you feel that overall, the activity [pays off](../thinking/Glossary.md#payoff) reducing overall risk.
+So, it's a trade-off: continue adding logging statements so long as you feel that overall, the activity [pays off](/thinking/Glossary.md#payoff) reducing overall risk.
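
To make this trade-off concrete, here is a minimal sketch (in Python, with invented names - not taken from any real codebase) of the kind of logging that tends to pay off: a few statements at decision points and failures, rather than a statement for every step.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("payments")

@dataclass
class Account:
    id: str
    balance: float

@dataclass
class Payment:
    id: str
    amount: float

def process_payment(payment: Payment, account: Account) -> bool:
    # Worth logging: enough context to reconstruct a failure in production.
    logger.info("processing payment %s for account %s", payment.id, account.id)
    if account.balance < payment.amount:
        # Worth logging: *why* a payment was rejected.
        logger.warning("rejected payment %s: insufficient funds", payment.id)
        return False
    # Probably not worth logging every intermediate calculation - too much
    # logging is just noise, and makes it harder to see what went wrong.
    account.balance -= payment.amount
    return True
```

Each statement costs a little schedule and a little noise; the bet is that it reduces the bigger risk of an undiagnosable production failure.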
### Example 2: Over-Engineering
@@ -115,7 +115,7 @@ Some people would argue that YAGNI is really a weapon to combat over-engineering
- obsessing over metric perfection, such as going from 99% to 100% code-coverage.
- reaching for heavyweight libraries or tools to solve trivial issues.
-It's important to reflect on the fact that there are other factors at play here: people know they'll be judged on the quality of their work, don't want to make mistakes and might want to add tools or new skills to their CVs (all of which we'll cover in [Agency Risk](../risks/Agency-Risk.md). If you are running the development team, you need to be aware of this risk and work hard to minimise it.
+It's important to reflect on the fact that there are other factors at play here: people know they'll be judged on the quality of their work, don't want to make mistakes and might want to add tools or new skills to their CVs (all of which we'll cover in [Agency Risk](/tags/Agency-Risk)). If you are running the development team, you need to be aware of this risk and work hard to minimise it.
![Over Engineering](/img/generated/introduction/over-engineering.svg)
@@ -202,34 +202,34 @@ At the same time, by adding "Could Possibly", Beck is encouraging us to go beyon
Our risk-centric view of this strategy would be:
-- Every action you take on a project has its own [Attendant Risks](../thinking/Glossary.md#attendant-risk).
-- The bigger or more complex the action, the more [Attendant Risk](../thinking/Glossary.md#attendant-risk) it'll have.
+- Every action you take on a project has its own [Attendant Risks](/thinking/Glossary.md#attendant-risk).
+- The bigger or more complex the action, the more [Attendant Risk](/thinking/Glossary.md#attendant-risk) it'll have.
- The reason you're taking action _at all_ is because you're trying to reduce risk elsewhere on the project.
-- Therefore, the best [Expected Value](Glossary.md#expected-value) is likely to be the action with the least [Attendant Risk](../thinking/Glossary.md#attendant-risk).
+- Therefore, the best [Expected Value](Glossary.md#expected-value) is likely to be the action with the least [Attendant Risk](/thinking/Glossary.md#attendant-risk).
- So, usually this is going to be the simplest thing.
-So, "Do The Simplest Thing That Could Possibly Work" is really a helpful guideline for Navigating the [Risk Landscape](../risks/Risk-Landscape.md), but this analysis shows clearly where it's left wanting:
+So, "Do The Simplest Thing That Could Possibly Work" is really a helpful guideline for Navigating the [Risk Landscape](/risks/Risk-Landscape.md), but this analysis shows clearly where it's left wanting:
- - _Don't_ do the simplest thing if there are other things with a better [Expected Value](../thinking/Glossary.md#expected-value) available.
+ - _Don't_ do the simplest thing if there are other things with a better [Expected Value](/thinking/Glossary.md#expected-value) available.
As an example of where this might be the case, think about how you might write a big, complex function (for example, processing interest accrual on a loan). The _simplest thing_ might be to just write a single function and a few unit tests for it. However, a slightly _less simple thing_ that would work might be to decompose the function into multiple steps, each with its own unit tests. Perhaps you might have a step which calculates the number of days where interest is due (working days, avoiding bank holidays), another step that considers repayments, a step that works out different interest rates and so on.
![Different payoff for doing the simplest thing vs something slightly less simple with more effort](/img/generated/introduction/risk_landscape_4_simplest.png)
-Functional decomposition and extra testing might not be the _simplest thing_, but it might reduce risks in other ways - making the code easier to understand, easier to test and easier to modify in the future. So deciding up-front to accept this extra complexity and effort in exchange for the other benefits might seem like a better [Payoff](../thinking/Glossary.md#payoff) than the simplest thing.
+Functional decomposition and extra testing might not be the _simplest thing_, but it might reduce risks in other ways - making the code easier to understand, easier to test and easier to modify in the future. So deciding up-front to accept this extra complexity and effort in exchange for the other benefits might seem like a better [Payoff](/thinking/Glossary.md#payoff) than the simplest thing.
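
Here is a sketch of what that _less simple thing_ might look like. The function names and interest rules are invented for illustration (real day-count conventions and repayment handling are more involved, and the repayments step is omitted):

```python
from datetime import date, timedelta

def is_accruing_day(day: date, bank_holidays: set) -> bool:
    """Step 1: does interest accrue on this day? (unit-testable on its own)"""
    return day.weekday() < 5 and day not in bank_holidays

def accruing_days(start: date, end: date, bank_holidays: set) -> list:
    """Step 2: enumerate the accruing days in the period [start, end)."""
    days, d = [], start
    while d < end:
        if is_accruing_day(d, bank_holidays):
            days.append(d)
        d += timedelta(days=1)
    return days

def daily_interest(balance: float, annual_rate: float) -> float:
    """Step 3: interest for a single day (simplified convention)."""
    return balance * annual_rate / 365

def accrued_interest(balance, annual_rate, start, end, bank_holidays=frozenset()):
    """The composed calculation: each step above gets its own unit tests."""
    return sum(daily_interest(balance, annual_rate)
               for _ in accruing_days(start, end, bank_holidays))
```

This is more code and more tests than the single-function version, but each step is now easy to understand, test and change independently.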
### Example 4: Continue Testing or Release?
You're on a project and you're faced with the decision - release now or do more User Acceptance Testing (UAT)?
-Obviously, in the ideal world, we want to get to the place on the [Risk Landscape](../thinking/Glossary.md#risk-landscape) where we have a tested, bug-free system in production. But we're not there yet, and we have funding pressure to get the software into the hands of some paying customers. But what if we disappoint the customers and create bad feeling? The table below shows an example:
+Obviously, in the ideal world, we want to get to the place on the [Risk Landscape](/thinking/Glossary.md#risk-landscape) where we have a tested, bug-free system in production. But we're not there yet, and we have funding pressure to get the software into the hands of some paying customers. But what if we disappoint the customers and create bad feeling? The table below shows an example:
|Risk Managed |Action |Attendant Risk |Payoff |
|----------------------|-----------------------------|-----------------------------------------|-------------------|
|Funding Risk |**Go Live** |Reputational Risk, Operational Risk |MEDIUM |
|Implementation Risk |**Another Round of UAT** |Worse Funding Risk, Operational Risk |LOW |
-This is (a simplification of) the dilemma of lots of software projects - _test further_, to reduce the risk of users discovering bugs ([Implementation Risk](../risks/Feature-Risk.md#implementation-risk)) which would cause us reputational damage, or _get the release done_ and reduce our [Funding Risk](../risks/Scarcity-Risk.md#funding-risk) by getting paying clients sooner.
+This is (a simplification of) the dilemma of lots of software projects - _test further_, to reduce the risk of users discovering bugs ([Implementation Risk](/tags/Implementation-Risk)) which would cause us reputational damage, or _get the release done_ and reduce our [Funding Risk](/tags/Funding-Risk) by getting paying clients sooner.
Lots of software projects end up in a phase of "release paralysis" - wanting things to be perfect before you show them to customers. But sometimes this places too much emphasis on preserving reputation over getting paying customers. Also, getting real customers is [meeting reality](Glossary.md#meet-reality) and will probably surface new [hidden risks](Glossary.md#hidden-risk) that are missing from the analysis.
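
To see how you might put rough numbers behind a payoff column like the one above, here is a toy expected-value comparison. Every figure below is invented for illustration, and in practice the estimates are themselves uncertain:

```python
# Toy expected-value comparison for the release dilemma (invented numbers).
value_early_release = 40_000   # value of getting paying customers sooner
cost_reputation = 50_000       # damage if users hit a serious bug
cost_more_uat = 20_000         # funding cost of another round of UAT
p_bug_now = 0.3                # chance of a serious bug if we go live today
p_bug_after_uat = 0.1          # UAT reduces, but doesn't eliminate, that risk

ev_go_live = value_early_release - p_bug_now * cost_reputation
ev_more_uat = value_early_release - cost_more_uat - p_bug_after_uat * cost_reputation

print(ev_go_live)   # 25000.0 - the MEDIUM payoff
print(ev_more_uat)  # 15000.0 - the LOW payoff
```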
@@ -237,9 +237,9 @@ Lots of software projects end up in a phase of "release paralysis" - wanting thi
An important take-away here is that you don't have to accept the dilemma as stated. You can change the actions to improve the payoff, and [meet reality more gradually](Meeting-Reality#the-cost-of-meeting-reality):
- - Start a closed [beta test](../practices/Glossary-Of-Practices.md#beta-test) with a group of friendly customers
- - Use [feature toggles](../practices/Glossary-Of-Practices.md#feature-toggle) to release only some components of the software
- - [Dog-food](../practices/Glossary-Of-Practices.md#dog-fooding) the software internally so you can find out whether it's useful in its current state.
+ - Start a closed [beta test](/practices/Glossary-Of-Practices.md#beta-test) with a group of friendly customers
+ - Use [feature toggles](/practices/Glossary-Of-Practices.md#feature-toggle) to release only some components of the software
+ - [Dog-food](/practices/Glossary-Of-Practices.md#dog-fooding) the software internally so you can find out whether it's useful in its current state.
A second approach is to improve the payoff of the losing outcomes. Here are some examples:
@@ -257,7 +257,7 @@ As we've seen, figuring out payoff is made more tricky because often the actions
Many Agile frameworks such as [Scrum](../bets/Purpose-Development-Team#case-2-scrum) place a lot of emphasis on estimating and time-boxing work: trying to work out when you're going to deliver something and sticking to it. But Risk-First is suggesting a totally different focus: factors like _time taken to deliver_ and _coordinating the completion of work_ are just risks to consider along with all the others.
-The most valuable project management skill is being able to chart a course which minimises risk. Sometimes, that will mean [hitting a deadline](../risks/Deadline-Risk.md), but equally it could be [reducing codebase complexity](../risks/Complexity-Risk.md), [making a feature more accessible](../risks/Feature-Risk.md#feature-access-risk) or [removing problematic dependencies](../risks/Software-Dependency-Risk.md).
+The most valuable project management skill is being able to chart a course which minimises risk. Sometimes, that will mean [hitting a deadline](/tags/Deadline-Risk), but equally it could be [reducing codebase complexity](/tags/Complexity-Risk), [making a feature more accessible](/tags/Feature-Access-Risk) or [removing problematic dependencies](/tags/Software-Dependency-Risk).
The most important skill is to be able to _weigh up the risks_, decide on a course of action that gives you the greatest expected value and look for ways of increasing the payoff of winning and losing.
diff --git a/docs/thinking/Crisis-Mode.md b/docs/thinking/Crisis-Mode.md
index 8d8feee7d..1b623a12d 100644
--- a/docs/thinking/Crisis-Mode.md
+++ b/docs/thinking/Crisis-Mode.md
@@ -63,7 +63,7 @@ Ideally, a methodology should be applicable at _any_ scale too:
- A department.
- An entire organisation.
-In practice, however, we usually find methodologies are tuned for certain scales. For example, [Extreme Programming (XP)](https://en.wikipedia.org/wiki/Extreme_programming) is designed for small, co-located teams and that's useful. But the fact it doesn't scale tells us something about it: chiefly, that it considers certain _kinds_ of risk, while ignoring others. At small scales XP works ok, but at larger scales other risks (such as team [Coordination Risk](../risks/Coordination-Risk.md)) increase too fast for it to work.
+In practice, however, we usually find methodologies are tuned for certain scales. For example, [Extreme Programming (XP)](https://en.wikipedia.org/wiki/Extreme_programming) is designed for small, co-located teams and that's useful. But the fact it doesn't scale tells us something about it: chiefly, that it considers certain _kinds_ of risk, while ignoring others. At small scales XP works ok, but at larger scales other risks (such as team [Coordination Risk](/tags/Coordination-Risk)) increase too fast for it to work.
If the methodology _fails at a particular scale_ this tells you something about the risks that the methodology isn't addressing. One of the things Risk-First explores is trying to place methodologies and practices within a framework to say _when_ they are applicable.
@@ -73,7 +73,7 @@ In the previous section on [Health](Health.md) we looked at how risk management
In 2020 the world was plunged into a pandemic. Everything changed very quickly, including the nature of software development. Lots of the practices we'd grown used to (such as XP's small, co-located teams) had to be jettisoned and replaced with Zoom calls and instant messaging apps. This was a very sudden, rapid change in the technology we use to do our jobs, but in a more general sense we need to understand that Agile, XP and Scrum were invented at the turn of the 21st century. The [Lean Manufacturing](https://en.wikipedia.org/wiki/Lean_manufacturing) movement originated post-WW2.
-The general ideas they espouse have stood the test of time but where they recommend particular technologies things are looking more shaky. [Pair Programming](../practices/Glossary-Of-Practices.md#pair-programming) where two developers share the same keyboard doesn't work so well anymore. However, it can be made to work over video conferencing and when we all move to augmented reality headsets perhaps there will be another configuration of this. We can now do Pair Programming with our artificial intelligence "co-pilots" - but is that managing the same risks?
+The general ideas they espouse have stood the test of time but where they recommend particular technologies things are looking more shaky. [Pair Programming](/practices/Glossary-Of-Practices.md#pair-programming), where two developers share the same keyboard, doesn't work so well anymore. However, it can be made to work over video conferencing and when we all move to augmented reality headsets perhaps there will be another configuration of this. We can now do Pair Programming with our artificial intelligence "co-pilots" - but is that managing the same risks?
The point I am making here is that while there are [technology tools to support risk management](Track-Risk.md) the idea itself is not wedded to a particular technology, culture or way of working. And, it is as old as the hills.
diff --git a/docs/thinking/De-Risking.md b/docs/thinking/De-Risking.md
index 0901ce10d..039eb7273 100644
--- a/docs/thinking/De-Risking.md
+++ b/docs/thinking/De-Risking.md
@@ -61,9 +61,9 @@ The table above lists a set of _generic strategies_ for derisking which we'll lo
1. **Do Risky Things Early**: If you are building some software process which has ten steps in it, and the 9th step has a high probability of not being implementable, then _build the 9th step first_. If you succeed, you've massively reduced the risk of the process construction. If you fail, you'll only have lost the time it took to build that one step. _Build a proof of concept_.
-1. **Take Care With Dependencies**: Choose popular technologies and known reliable components. Whilst hiring people is hard work at the best of times, hiring PL/1 programmers is _really hard_. This tactic is explored in much more depth in [Software Dependency Risk](../risks/Software-Dependency-Risk.md)
+1. **Take Care With Dependencies**: Choose popular technologies and known reliable components. Whilst hiring people is hard work at the best of times, hiring PL/1 programmers is _really hard_. This tactic is explored in much more depth in [Software Dependency Risk](/tags/Software-Dependency-Risk)
-1. **Redundancy**: Avoid single points of failure. For example, Pair Programming is a control espoused by [Extreme Programming](../practices/Agile.md#extreme-programming) to reduce [Key Person Risk](../risks/Agency-Risk.md) and [Communication Risk](../risks/Communication-Risk.md). See [Dependency Risk](../risks/Dependency-Risk.md) for more on this.
+1. **Redundancy**: Avoid single points of failure. For example, Pair Programming is a control espoused by [Extreme Programming](/tags/Extreme-Programming-(XP)) to reduce [Key Person Risk](/tags/Agency-Risk) and [Communication Risk](/tags/Communication-Risk). See [Dependency Risk](/tags/Dependency-Risk) for more on this.
1. **Create Options**: Using _feature flags_ allows you to turn off functionality in production, avoiding an all-or-nothing commitment. Working in branches gives the same optionality while developing.
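
As a minimal sketch of the feature-flag idea (using a simple environment-variable lookup; a real system might use a configuration service or a dedicated flagging product instead, and all names here are invented):

```python
import os

def flag_enabled(name: str) -> bool:
    # The simplest possible flag store: an environment variable.
    return os.environ.get("FEATURE_" + name, "off") == "on"

def old_search(query: str) -> str:
    return "old results for " + query

def new_search(query: str) -> str:
    return "new results for " + query

def search(query: str) -> str:
    if flag_enabled("NEW_SEARCH"):
        return new_search(query)  # the new, riskier code path
    return old_search(query)      # still available: no all-or-nothing commitment
```

If the new path misbehaves in production, the flag can be switched off without a redeploy.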
@@ -97,17 +97,17 @@ The table above lists a set of _generic strategies_ for derisking which we'll lo
## Avoid
-**Avoiding** risk, means taking a route on the [Risk Landscape](../thinking/Glossary.md#risk-landscape) _around_ the risk. Neither the stakes or the payoff are changed.
+**Avoiding** risk means taking a route on the [Risk Landscape](/thinking/Glossary.md#risk-landscape) _around_ the risk. Neither the stakes nor the payoff are changed.
### General Examples
- **Avoiding flying** means that you're not going to be killed in a plane crash. However, you also lose the benefits that flying affords.
- - **Don't Launch a SaaS**: _Not_ launching an online service _avoids_ the [Operational Risk](../risks/Operational-Risk.md) involved in running one. Although you'll need to look for some other way to make a living.
+ - **Don't Launch a SaaS**: _Not_ launching an online service _avoids_ the [Operational Risk](/tags/Operational-Risk) involved in running one. Although you'll need to look for some other way to make a living.
### Specific Tactics
-1. **Avoid Unfamiliar Technologies**: If you are working in a team which has no experience of relational databases, then storing data in files _might_ be a way to avoid the [Learning Curve Risk](../risks/Communication-Risk.md#learning-curve-risk) associated with this technology. Of course, you may pick up other, more serious risks as a result: Relational Databases are software solutions to many kinds of [Coordination Risk](../risks/Coordination-Risk.md) problem, such as concurrent writes or consistency.
+1. **Avoid Unfamiliar Technologies**: If you are working in a team which has no experience of relational databases, then storing data in files _might_ be a way to avoid the [Learning Curve Risk](/tags/Learning-Curve-Risk) associated with this technology. Of course, you may pick up other, more serious risks as a result: Relational Databases are software solutions to many kinds of [Coordination Risk](/tags/Coordination-Risk) problem, such as concurrent writes or consistency.
2. **Do Your Research**: If you're not clear about the risks of a particular decision up front, it can be hard to avoid them. Although, some of the biggest breakthroughs come from people _not_ following this advice such as the Wright Brothers inventing powered flight and Percy Spencer inventing the microwave oven. Don't spend your life avoiding all risks.
@@ -129,13 +129,13 @@ The table above lists a set of _generic strategies_ for derisking which we'll lo
### Specific Tactics
-1. **Software as a Service**: [Software-as-a-Service (SaaS)](../risks/Software-Dependency-Risk.md) is an example of transferring risk, since another party is responsible for making sure the service is up-and-running, backed up, etc.
+1. **Software as a Service**: [Software-as-a-Service (SaaS)](/tags/Software-Dependency-Risk) is an example of transferring risk, since another party is responsible for making sure the service is up-and-running, backed up, etc.
1. **Employ Good People**: Having staff is a great way to share risk, whether you are a firm or a team. The employee takes care of some of the risk for you. In return, you're paying them a wage which helps them manage their own risks. This is the time-tested, win-win symbiosis of a good trade.
1. **Escalating**: If your team is receiving poor service from a supplier it might be in your interests to share this risk with, say, the legal department or procurement team.
-1. **Taking Responsibility**: If your firm is struggling to deal with a certain risk, why not become the expert and make yourself indispensable? In the section on [Process Risk](../risks/Process-Risk.md) we'll be looking at how this can happen organically within a company.
+1. **Taking Responsibility**: If your firm is struggling to deal with a certain risk, why not become the expert and make yourself indispensable? In the section on [Process Risk](/tags/Process-Risk) we'll be looking at how this can happen organically within a company.
1. **Delegating Responsibility**: Putting people in charge of specific risks shares or transfers the responsibility away from you. Note that inside organisations, transfer of risk can become a political game:
@@ -153,7 +153,7 @@ The table above lists a set of _generic strategies_ for derisking which we'll lo
- **War** is a risk that is usually accepted by businesses. You're unlikely to be able to buy insurance against this.
-- **Key Staff**: Having a super-star on the team is risky as they might leave. But sometimes they are a risk worth accepting because of the value they bring. This is covered in more detail in [Staff Risk](../risks/Scarcity-Risk.md#staff-risk).
+- **Key Staff**: Having a super-star on the team is risky as they might leave. But sometimes they are a risk worth accepting because of the value they bring. This is covered in more detail in [Staff Risk](/tags/Staff-Risk).
### Specific Tactics
@@ -183,7 +183,7 @@ There is a grey area here, because on the one hand you are [retaining](#retain)
1. **Emergency Funds**: Setting aside sufficient money to deal with a risk if it occurs.
-1. **Slack**: Accepting that sometimes tasks run long and building this into the plan. [Schedule Risk](../risks/Scarcity-Risk.md#schedule-risk) examines in detail how this works.
+1. **Slack**: Accepting that sometimes tasks run long and building this into the plan. [Schedule Risk](/tags/Schedule-Risk) examines in detail how this works.
## Monitor
@@ -199,7 +199,7 @@ There is a grey area here, because on the one hand you are [retaining](#retain)
### Specific Tactics
-1. **Establish Metrics** that allow you to observe the performance of the systems you build. [Map and Territory Risk](../risks/Map-And-Territory-Risk.md) covers this in more detail.
+1. **Establish Metrics** that allow you to observe the performance of the systems you build. [Map and Territory Risk](/tags/Map-And-Territory-Risk) covers this in more detail.
1. **Second opinions** and **audits** correct for errors in monitoring by people who can be too close to the problem.
diff --git a/docs/thinking/Development-Process.md b/docs/thinking/Development-Process.md
index 02dde1720..2ee859307 100644
--- a/docs/thinking/Development-Process.md
+++ b/docs/thinking/Development-Process.md
@@ -19,7 +19,7 @@ tweet: yes
# Analysing The Development Process
-In [A Simple Scenario](A-Simple-Scenario.md) we introduced some terms for talking about risk (such as [Attendant Risk](../thinking/Glossary.md#attendant-risk), [Hidden Risk](../thinking/Glossary.md#attendant-risk) and the [Internal Model](../thinking/Glossary.md#internal-model)).
+In [A Simple Scenario](A-Simple-Scenario.md) we introduced some terms for talking about risk (such as [Attendant Risk](/thinking/Glossary.md#attendant-risk), [Hidden Risk](/thinking/Glossary.md#hidden-risk) and the [Internal Model](/thinking/Glossary.md#internal-model)).
We've also introduced a notation in the form of [Risk-First Diagrams](./Risk-First-Diagrams.md) which allows us to represent the ways in which we can change the risks by [Taking Action](./Glossary.md#taking-action).
@@ -70,12 +70,12 @@ We can all see this might end in disaster, but why?
Two reasons:
-1. You're [Meeting Reality](../thinking/Glossary.md#meet-reality) all-in-one-go: all of these risks materialize at the same time, and you have to deal with them all at once.
-2. Because of this, at the point you put code into the hands of your users, your [Internal Model](../thinking/Glossary.md#internal-model) is at its least-developed. All the [Hidden Risks](../thinking/Glossary.md#hidden-risk) now need to be dealt with at the same time, in production.
+1. You're [Meeting Reality](/thinking/Glossary.md#meet-reality) all-in-one-go: all of these risks materialize at the same time, and you have to deal with them all at once.
+2. Because of this, at the point you put code into the hands of your users, your [Internal Model](/thinking/Glossary.md#internal-model) is at its least-developed. All the [Hidden Risks](/thinking/Glossary.md#hidden-risk) now need to be dealt with at the same time, in production.
## Applying the Toy Process
-Let's look at how our toy process should act to prevent these risks materializing by considering an unhappy path. One where, at the outset, we have lots of [Hidden Risks](../thinking/Glossary.md#hidden-risk). Let's say a particularly vocal user rings up someone in the office and asks for new **Feature X** to be added to the software. It's logged as a new feature request, but:
+Let's look at how our toy process should act to prevent these risks materializing by considering an unhappy path. One where, at the outset, we have lots of [Hidden Risks](/thinking/Glossary.md#hidden-risk). Let's say a particularly vocal user rings up someone in the office and asks for new **Feature X** to be added to the software. It's logged as a new feature request, but:
- Unfortunately, this feature, once programmed, will break an existing **Feature Y**.
- Implementing the feature will use an API from a library which contains bugs that have to be coded around.
@@ -87,38 +87,38 @@ Let's look at how our toy process should act to prevent these risks materializin
The diagram above shows how this plays out.
-This is a slightly contrived example, as you'll see. But let's follow our feature through the process and see how it meets reality slowly, and the [Hidden Risks](../thinking/Glossary.md#hidden-risk) are discovered:
+This is a slightly contrived example, as you'll see. But let's follow our feature through the process and see how it meets reality slowly, and the [Hidden Risks](/thinking/Glossary.md#hidden-risk) are discovered:
### Specification
-The first stage of the journey for the feature is that it meets the Business Analyst (BA). The _purpose_ of the BA is to examine new goals for the project and try to integrate them with _reality as they understand it_. A good BA might take a feature request and vet it against his [Internal Model](../thinking/Glossary.md#internal-model), saying something like:
+The first stage of the journey for the feature is that it meets the Business Analyst (BA). The _purpose_ of the BA is to examine new goals for the project and try to integrate them with _reality as they understand it_. A good BA might take a feature request and vet it against their [Internal Model](/thinking/Glossary.md#internal-model), saying something like:
- "This feature doesn't belong on the User screen, it belongs on the New Account screen"
- "90% of this functionality is already present in the Document Merge Process"
- "We need a control on the form that allows the user to select between Internal and External projects"
-In the process of doing this, the BA is turning the simple feature request _idea_ into a more consistent, well-explained _specification_ or _requirement_ which the developer can pick up. But why is this a useful step in our simple methodology? From the perspective of our [Internal Model](../thinking/Glossary.md#internal-model), we can say that the BA is responsible for:
+In the process of doing this, the BA is turning the simple feature request _idea_ into a more consistent, well-explained _specification_ or _requirement_ which the developer can pick up. But why is this a useful step in our simple methodology? From the perspective of our [Internal Model](/thinking/Glossary.md#internal-model), we can say that the BA is responsible for:
-- Trying to surface [Hidden Risks](../thinking/Glossary.md#hidden-risk)
-- Trying to evaluate [Attendant Risks](../thinking/Glossary.md#attendant-risk) and make them clear to everyone on the project.
+- Trying to surface [Hidden Risks](/thinking/Glossary.md#hidden-risk)
+- Trying to evaluate [Attendant Risks](/thinking/Glossary.md#attendant-risk) and make them clear to everyone on the project.
![BA Specification: exposing Hidden Risks as soon as possible](/img/generated/introduction/development_process_ba.png)
In surfacing these risks, there is another outcome: while **Feature X** might be flawed as originally presented, the BA can "evolve" it into a specification and tie it down sufficiently to reduce the risks. The BA does all this by simply _thinking about it_, _talking to people_ and _writing stuff down_.
-This process of evolving the feature request into a requirement is the BA's job. From our Risk-First perspective, it is _taking an idea and making it [Meet Reality](../thinking/Glossary.md#meet-reality)_. Not the _full reality_ of production (yet), but something more limited.
+This process of evolving the feature request into a requirement is the BA's job. From our Risk-First perspective, it is _taking an idea and making it [Meet Reality](/thinking/Glossary.md#meet-reality)_. Not the _full reality_ of production (yet), but something more limited.
### Code And Unit Test
-The next stage for our feature, **Feature X** is that it gets coded and some tests get written. Let's look at how our [Goal](../thinking/Glossary.md#goal) meets a new reality: this time it's the reality of a pre-existing codebase, which has it's own internal logic.
+The next stage for our feature, **Feature X**, is that it gets coded and some tests get written. Let's look at how our [Goal](/thinking/Glossary.md#goal) meets a new reality: this time it's the reality of a pre-existing codebase, which has its own internal logic.
-As the developer begins coding the feature in the software, they will start with an [Internal Model](../thinking/Glossary.md#internal-model) of the software, and how the code fits into it. But, in the process of implementing it, they are likely to learn about the codebase, and their [Internal Model](../thinking/Glossary.md#internal-model) will develop.
+As the developer begins coding the feature in the software, they will start with an [Internal Model](/thinking/Glossary.md#internal-model) of the software, and how the code fits into it. But, in the process of implementing it, they are likely to learn about the codebase, and their [Internal Model](/thinking/Glossary.md#internal-model) will develop.
![Coding Process: exposing more hidden risks as you code](/img/generated/introduction/development_process_code.png)
-At this point, let's review the visual grammar of the diagram above. Here, we're showing how the balance of risks will change if the developer [Takes Action](../thinking/Glossary.md#taking-action) and writes some code. On the left, we have the current state of the world, on the right is the anticipated state _after_ taking the action.
+At this point, let's review the visual grammar of the diagram above. Here, we're showing how the balance of risks will change if the developer [Takes Action](/thinking/Glossary.md#taking-action) and writes some code. On the left, we have the current state of the world, on the right is the anticipated state _after_ taking the action.
-The round-cornered rectangles represent our [Internal Model](../thinking/Glossary.md#internal-model), and these contain our view of [Risk](../thinking/Glossary.md#risk), whether the risks we face right now, or the [Attendant Risks](../thinking/Glossary.md#attendant-risk) expected after taking the action. We're not at the stage where taking this actions is _completing_ the goal. In fact, arguably, we're facing _worse_ risks after taking action than before, since we now have _development difficulties_ to contend with!
+The round-cornered rectangles represent our [Internal Model](/thinking/Glossary.md#internal-model), and these contain our view of [Risk](/thinking/Glossary.md#risk), whether the risks we face right now, or the [Attendant Risks](/thinking/Glossary.md#attendant-risk) expected after taking the action. We're not at the stage where taking this action is _completing_ the goal. In fact, arguably, we're facing _worse_ risks after taking action than before, since we now have _development difficulties_ to contend with!
But at least, taking the action of "coding and unit testing" is expected to mitigate the risk of "Duplicating Functionality".
@@ -132,11 +132,11 @@ So, within this example process, this stage is about meeting a new reality: the
![Integration testing exposes Hidden Risks before you get to production](/img/generated/introduction/development_process_integration.png)
-As shown in the diagram above, at this stage we might discover the [Hidden Risk](../thinking/Glossary.md#hidden-risk) that we'd break **Feature Y**
+As shown in the diagram above, at this stage we might discover the [Hidden Risk](/thinking/Glossary.md#hidden-risk) that we'd break **Feature Y**.
### User Acceptance Test
-Next, User Acceptance Testing (UAT) is where our new feature meets another reality: _actual users_. I think you can see how the process works by now. We're just flushing out yet more [Hidden Risks](../thinking/Glossary.md#hidden-risk).
+Next, User Acceptance Testing (UAT) is where our new feature meets another reality: _actual users_. I think you can see how the process works by now. We're just flushing out yet more [Hidden Risks](/thinking/Glossary.md#hidden-risk).
![UAT - putting tame users in front of your software is better than real ones, where the risk is higher ](/img/generated/introduction/development_process_uat.png)
@@ -144,16 +144,16 @@ Next, User Acceptance Testing (UAT) is where our new feature meets another reali
Here are a few quick observations about managing risk which are revealed both by this toy software process and also our previous example of [The Dinner Party](A-Simple-Scenario.md):
- - [Taking Action](../thinking/Glossary.md#taking-action) is the _only_ way to create change in the world.
- - It's also the only way we can _learn_ about the world, adding to our [Internal Model](../thinking/Glossary.md#internal-model).
- - In this case, we discover a [Hidden Risk](../thinking/Glossary.md#hidden-risk): the user's difficulty in finding the feature.
+ - [Taking Action](/thinking/Glossary.md#taking-action) is the _only_ way to create change in the world.
+ - It's also the only way we can _learn_ about the world, adding to our [Internal Model](/thinking/Glossary.md#internal-model).
+ - In this case, we discover a [Hidden Risk](/thinking/Glossary.md#hidden-risk): the user's difficulty in finding the feature.
- In return, we can _expect_ the process of performing the UAT to delay our release (this is an attendant schedule risk).
## Major Themes
So, what does this kind of Risk-First analysis tell us about _development processes in general_? Below are four conclusions you can take away from the chapter, but which are all major themes of Risk-First that we'll be developing later:
-**First**, the people who set up the development process _didn't know_ about these _exact_ risks, but they knew the _shape that the risks take_. The process builds "nets" for the different kinds of [Hidden Risks](../thinking/Glossary.md#hidden-risk) without knowing exactly what they are. In order to build these nets, we have to be able to categorise the types of risk we face. This is something we'll look at in the [Risks](../risks/Start.md) part of Risk-First.
+**First**, the people who set up the development process _didn't know_ about these _exact_ risks, but they knew the _shape that the risks take_. The process builds "nets" for the different kinds of [Hidden Risks](/thinking/Glossary.md#hidden-risk) without knowing exactly what they are. In order to build these nets, we have to be able to categorise the types of risk we face. This is something we'll look at in the [Risks](/risks/Start.md) part of Risk-First.
**Second**, are these really risks, or are they _problems we just didn't know about_? I am using the terms interchangeably, to a certain extent. Even when you know you have a problem, it's still a risk to your deadline until it's solved. So, when does a risk become a problem? Is a problem still just a schedule-risk, or cost-risk? We'll come back to this question soon.
diff --git a/docs/thinking/Evaluating-Risk.md b/docs/thinking/Evaluating-Risk.md
index 1dfd6e660..c338947f7 100644
--- a/docs/thinking/Evaluating-Risk.md
+++ b/docs/thinking/Evaluating-Risk.md
@@ -94,12 +94,12 @@ Enough with the numbers and the theory: we need a practical framework, rather t
- First, there isn't enough scientific evidence for an approach like this. We can look at collected data about historic IT projects, but techniques and tools advance rapidly.
- Second, IT projects have too many confounding factors, such as experience of the teams, technologies used, problem domain, clients etc. That is, the risks faced by IT projects are _too diverse_ and _hard to quantify_ to allow for meaningful comparison from one to the next.
-- Third, as soon as you _publish a date_ it changes the expectations of the project (see [Student Syndrome](../risks/Scarcity-Risk.md#student-syndrome)).
-- Fourth, metrics get [misused](../risks/Map-And-Territory-Risk.md) and [gamed](../risks/Agency-Risk.md).
+- Third, as soon as you _publish a date_ it changes the expectations of the project (see [Student Syndrome](/risks/Scarcity-Risk.md#student-syndrome)).
+- Fourth, metrics get [misused](/tags/Map-And-Territory-Risk) and [gamed](/tags/Agency-Risk).
## Discounting In A Crisis
-Reality is messy. Dressing it up with numbers doesn't change that and you risk [fooling yourself](../risks/Map-And-Territory-Risk.md). If this is the case, is there any hope at all in what we're doing? Yes: _forget precision_. You should, with experience, be able to hold up two separate risks and answer the question, "is this one bigger than this one?"
+Reality is messy. Dressing it up with numbers doesn't change that and you risk [fooling yourself](/tags/Map-And-Territory-Risk). If this is the case, is there any hope at all in what we're doing? Yes: _forget precision_. You should, with experience, be able to hold up two separate risks and answer the question, "is this one bigger than this one?"
Lots of projects start with good intentions. Carefully evaluating the risks of your actions or inaction is great when the going is good. But then, when the project is hit with delays, everything goes out of the window.
diff --git a/docs/thinking/Glossary.md b/docs/thinking/Glossary.md
index ea5848ea1..c990bd7fd 100644
--- a/docs/thinking/Glossary.md
+++ b/docs/thinking/Glossary.md
@@ -20,13 +20,13 @@ tweet: yes
The process of removing physical, spatial, or temporal details or attributes in the study of objects or systems in order to more closely attend to other details of interest.
-_See: [Complexity Risk](../risks/Complexity-Risk.md)_
+_See: [Complexity Risk](/tags/Complexity-Risk)_
### Agent
Agency is the capacity of an actor to act in a given environment. We use the term _agent_ to refer to any process, person, system or organisation with agency.
-_See: [Agency Risk](../risks/Agency-Risk.md)_
+_See: [Agency Risk](/tags/Agency-Risk)_
### Anti-Goal
@@ -70,7 +70,7 @@ _See: [Feedback Loop (Tag)](/tags/feedback-loop)_
### Goal
-A picture of the future that an individual or team carries within their [Internal Model](#internal-model); An imagined destination on the [Risk Landscape](#risk-landscape). A specific [Upside Risk](#upside-risk) we'd like to nurture and realize. _See: [A Simple Scenario](../thinking/A-Simple-Scenario.md)_
+A picture of the future that an individual or team carries within their [Internal Model](#internal-model); an imagined destination on the [Risk Landscape](#risk-landscape). A specific [Upside Risk](#upside-risk) we'd like to nurture and realize. _See: [A Simple Scenario](/thinking/A-Simple-Scenario.md)_
_See: [Goal (Tag)](/tags/goal)_
@@ -132,7 +132,7 @@ _See: [Hidden Risk (Tag)](/tags/hidden-risk)_
Risks that, as a result of [Taking Action](#taking-action), have been minimized.
-_See: [De-Risking](../thinking/De-Risking.md)_
+_See: [De-Risking](/thinking/De-Risking.md)_
#### Upside Risk
diff --git a/docs/thinking/Health.md b/docs/thinking/Health.md
index 5097e69d8..98534f8a0 100644
--- a/docs/thinking/Health.md
+++ b/docs/thinking/Health.md
@@ -26,11 +26,11 @@ I am going to argue here that _risks_ affect the health of a thing, where the th
- **A Living Organism**, such as the human body, which is exposed to _health risks_, such as [Cardiovascular Disease](https://en.wikipedia.org/wiki/Cardiovascular_disease).
- - **A Software Product** is a thing we interact with, built out of code. The health of that software is damaged by the existence of [bugs and missing features](../risks/Feature-Risk.md).
+ - **A Software Product** is a thing we interact with, built out of code. The health of that software is damaged by the existence of [bugs and missing features](/tags/Feature-Risk).
- **A Project**, like making a film or organising a [dinner party](A-Simple-Scenario.md).
- - **A Commercial Entity**, such as a business, which is exposed to various [Operational Risks](../risks/Operational-Risk.md) in order to continue to function. Businesses face different health risks than organisms, like key staff leaving, reputation damage or running out of money.
+ - **A Commercial Entity**, such as a business, which is exposed to various [Operational Risks](/tags/Operational-Risk) in order to continue to function. Businesses face different health risks than organisms, like key staff leaving, reputation damage or running out of money.
- On a larger scale, **a State** is a system, the health of which is threatened by various existential risks such as revolution, climate change or nuclear attack.
@@ -117,9 +117,9 @@ The examples we've looked at so far are all at different scales, and could be ne
When an organisation or a project hires an employee, they are doing so in order to improve their health: make sales, fix bugs, clean the office and so on. This is a symbiotic relationship - the health of the organisation is related to the health of the employee. The health of staff, projects, departments and firms is all interrelated. You might be working on a software product for a team inside an organisation operating in a certain country. You are probably going to have to consider the health of more than one of those things. Can a team be "healthy" if the organisation it is contained within is dying? Probably not.
-Sometimes, as discussed in [Agency Risk](../risks/Agency-Risk.md) these can be in conflict with one another:
+Sometimes, as discussed in [Agency Risk](/tags/Agency-Risk) these can be in conflict with one another:
- - Putting in [a heroic effort](../risks/Agency-Risk.md#the-hero) might save a project but at the expense of your personal health.
+ - Putting in [a heroic effort](/risks/Agency-Risk.md#the-hero) might save a project but at the expense of your personal health.
- [Lobbying](https://en.wikipedia.org/wiki/Lobbying) is trying to push the political agenda of an organisation at the state level, which might help the health of the organisation at the expense of the state or its citizens.
@@ -129,7 +129,7 @@ Sometimes, as discussed in [Agency Risk](../risks/Agency-Risk.md) these can be i
If all of these disparate domains at all of these different scales are tracking health risks, it is clear that we should be doing this for software projects too.
-The health risks affecting people are well known (by doctors, at least) and we have the list of state-level risks above too. [Risk-First](https://riskfirst.org) is therefore about building a similar catalog for risks affecting the health of software development projects. Risks are in general _not_ unique on software projects - they are the same ones over and over again, such as [Communication Risk](../risks/Communication-Risk.md) or [Dependency Risk](../risks/Dependency-Risk.md). Every project faces these.
+The health risks affecting people are well known (by doctors, at least) and we have the list of state-level risks above too. [Risk-First](https://riskfirst.org) is therefore about building a similar catalog for risks affecting the health of software development projects. Risks are in general _not_ unique on software projects - they are the same ones over and over again, such as [Communication Risk](/tags/Communication-Risk) or [Dependency Risk](/tags/Dependency-Risk). Every project faces these.
Having shown that risk management is _scale invariant_, we're next going to look at general strategies we can use to manage all of these various health risks.
diff --git a/docs/thinking/Just-Risk.md b/docs/thinking/Just-Risk.md
index 359d982ab..f6d20efc8 100644
--- a/docs/thinking/Just-Risk.md
+++ b/docs/thinking/Just-Risk.md
@@ -58,18 +58,18 @@ This _hints_ at the fact that at some level it's all about risk:
The reason you are [taking an action](Glossary.md#taking-action) is to manage a risk. For example:
- - If you're coding up new features in the software, this is managing [Feature Risk](../risks/Feature-Risk.md) (which we'll explore in more detail later).
- - If you're getting a business sign-off for something, this is managing the risk of everyone not agreeing on a course of action (a [Coordination Risk](../risks/Coordination-Risk.md)).
- - If you're writing a test, then that's managing a type of [Implementation Risk](../risks/Feature-Risk.md#implementation-risk).
+ - If you're coding up new features in the software, this is managing [Feature Risk](/tags/Feature-Risk) (which we'll explore in more detail later).
+ - If you're getting a business sign-off for something, this is managing the risk of everyone not agreeing on a course of action (a [Coordination Risk](/tags/Coordination-Risk)).
+ - If you're writing a test, then that's managing a type of [Implementation Risk](/tags/Implementation-Risk).
## Every Action Has Attendant Risk
- How do you know if the action will get completed?
- Will it overrun, or be on time?
- Will it lead to yet more actions?
-- What [Hidden Risk](../thinking/Glossary.md#hidden-risk) will it uncover?
+- What [Hidden Risk](/thinking/Glossary.md#hidden-risk) will it uncover?
-Consider _coding a feature_. The whole process of coding is an exercise in learning what we didn't know about the world, uncovering problems and improving our [Internal Model](../thinking/Glossary.md#internal-model). That is, flushing out the [Attendant Risk](../thinking/Glossary.md#attendant-risk) of the [Goal](../thinking/Glossary.md#goal).
+Consider _coding a feature_. The whole process of coding is an exercise in learning what we didn't know about the world, uncovering problems and improving our [Internal Model](/thinking/Glossary.md#internal-model). That is, flushing out the [Attendant Risk](/thinking/Glossary.md#attendant-risk) of the [Goal](/thinking/Glossary.md#goal).
And, as we saw in the [Introduction](A-Simple-Scenario.md), even something _mundane_ like the Dinner Party had risks.
@@ -100,7 +100,7 @@ Let's look at a real-life example. The above image shows a selection of issues
The above diagram is an idealised example of this, showing how we take action to address the risks and goals on the left and end up with new risks on the right.
-[Goals](../thinking/Glossary.md#goal) live inside our [Internal Model](../thinking/Glossary.md#internal-model), just like Risks. Functionally, Goals and Risks are equivalent. For example, the Goal of "Implementing Feature X" is equivalent to mitigating "Risk of Feature X not being present".
+[Goals](/thinking/Glossary.md#goal) live inside our [Internal Model](/thinking/Glossary.md#internal-model), just like Risks. Functionally, Goals and Risks are equivalent. For example, the Goal of "Implementing Feature X" is equivalent to mitigating "Risk of Feature X not being present".
Let's try and back up that assertion with a few more examples:
@@ -110,7 +110,7 @@ Let's try and back up that assertion with a few more examples:
| Risk of looking technically inferior during the cold war | Feeling of technical superiority | Land a man on the moon |
| Risk of the market not requiring your skills | Job security | Retrain |
-There is a certain "interplay" between the concepts of risks, actions and goals. On the [Risk Landscape](../thinking/Glossary.md#risk-landscape), goals and risks correspond to starting points and destinations, whilst the action is moving on the risk landscape.
+There is a certain "interplay" between the concepts of risks, actions and goals. On the [Risk Landscape](/thinking/Glossary.md#risk-landscape), goals and risks correspond to starting points and destinations, whilst the action is moving on the risk landscape.
| **Starting Point** | **Movement** | **End Point** |
|--------------------|--------------|--------------------------------|
diff --git a/docs/thinking/Meeting-Reality.md b/docs/thinking/Meeting-Reality.md
index 64da1692d..0ba4aab1f 100644
--- a/docs/thinking/Meeting-Reality.md
+++ b/docs/thinking/Meeting-Reality.md
@@ -33,13 +33,13 @@ The world is too complex to understand at a glance. It takes years of growth an
Within a development team, the model is split amongst people, documents, email, tickets, code... but it is still a model.
-This "[Internal Model](../thinking/Glossary.md#internal-model)" of reality informs the actions we take in life: we take actions based on our model, hoping to change reality with some positive outcome.
+This "[Internal Model](/thinking/Glossary.md#internal-model)" of reality informs the actions we take in life: we take actions based on our model, hoping to change reality with some positive outcome.
![Taking actions changes reality, but changes your model of the risks too](/img/generated/introduction/model_vs_reality_2.png)
For example, while [organising a dinner party](A-Simple-Scenario.md) you'll have a model of who you expect to come. You might take actions to ensure there is enough food, that you've got RSVPs and so on.
-The actions we take have consequences in the real world. Hopefully, we eliminate some known risks but we might expose new [hidden risks](../thinking/Glossary.md#hidden-risk) as we go. There is a _recursive_ nature about this - we're left with an updated Internal Model, and we see new actions we have to take as a result.
+The actions we take have consequences in the real world. Hopefully, we eliminate some known risks but we might expose new [hidden risks](/thinking/Glossary.md#hidden-risk) as we go. There is a _recursive_ nature about this - we're left with an updated Internal Model, and we see new actions we have to take as a result.
## Navigating the "Risk Landscape"
@@ -51,7 +51,7 @@ I would argue that the best choice of what to do is the one has the greatest [Pa
![Navigating The Risk Landscape](/img/generated/introduction/risk_landscape_1.png)
-You can think of [Taking Action](../thinking/Glossary.md#taking-action) as moving your project on a "[Risk Landscape](Glossary.md#risk-landscape)". Ideally, when you take an action, you move from some place with worse risk to somewhere more favourable, as shown in the diagram above.
+You can think of [Taking Action](/thinking/Glossary.md#taking-action) as moving your project on a "[Risk Landscape](Glossary.md#risk-landscape)". Ideally, when you take an action, you move from some place with worse risk to somewhere more favourable, as shown in the diagram above.
Now, that's easier said than done! Sometimes, you can end up somewhere _worse_: the action you took to manage a risk has made things worse. Almost certainly, this will have been due to a hidden risk that you weren't aware of when you embarked on the action, otherwise you'd not have chosen it.
@@ -61,17 +61,17 @@ Now, that's easier said than done! Sometimes, you can end up somewhere _worse_:
_Automating processes_ (as shown in the diagram above) is often tempting: it _should_ save time, and reduce the amount of boring, repetitive work on a project. But sometimes, it turns into an industry in itself, consumes more effort than it'll ever pay back and needs to be maintained in the future at great expense.
-One popular type of automation is [Unit Testing](../practices/Glossary-Of-Practices.md#unit-testing). Writing unit tests adds to the amount of development work, so on its own, it _uses up time from the schedule_. It also creates complexity - you now have more code to manage. However, if you write _just enough_ of the right unit tests, you should be short-cutting the time spent finding issues in the User Acceptance Testing (UAT) stage, so you're hopefully trading off a larger [Schedule Risk](../risks/Scarcity-Risk.md#schedule-risk) from UAT and adding a smaller [Schedule Risk](../risks/Scarcity-Risk.md#schedule-risk) to Development.
+One popular type of automation is [Unit Testing](/practices/Glossary-Of-Practices.md#unit-testing). Writing unit tests adds to the amount of development work, so on its own, it _uses up time from the schedule_. It also creates complexity - you now have more code to manage. However, if you write _just enough_ of the right unit tests, you should be short-cutting the time spent finding issues in the User Acceptance Testing (UAT) stage, so you're hopefully trading off a larger [Schedule Risk](/tags/Schedule-Risk) from UAT and adding a smaller [Schedule Risk](/tags/Schedule-Risk) to Development.
### Example: MongoDB
On a previous project in a bank we had a requirement to store a modest amount of data and we needed to be able to retrieve it fast. The developer chose to use [MongoDB](https://www.mongodb.com) for this. At the time, others pointed out that other teams in the bank had had lots of difficulty deploying MongoDB internally, due to licensing issues and other factors internal to the bank.
-Other options were available, but the developer chose MongoDB because of their _existing familiarity_ with it: therefore, they felt that the [Hidden Risks](../thinking/Glossary.md#hidden-risk) of MongoDB were _lower_ than the other options.
+Other options were available, but the developer chose MongoDB because of their _existing familiarity_ with it: therefore, they felt that the [Hidden Risks](/thinking/Glossary.md#hidden-risk) of MongoDB were _lower_ than the other options.
This turned out to be a mistake: the internal bureaucracy eventually proved too great and MongoDB had to be abandoned after much investment of time.
-This is not a criticism of MongoDB: it's simply a demonstration that sometimes, the cure is worse than the disease. Successful projects are _always_ trying to _reduce_ [Attendant Risks](../thinking/Glossary.md#attendant-risk).
+This is not a criticism of MongoDB: it's simply a demonstration that sometimes, the cure is worse than the disease. Successful projects are _always_ trying to _reduce_ [Attendant Risks](/thinking/Glossary.md#attendant-risk).
## The Cost Of Meeting Reality
@@ -111,9 +111,9 @@ The Risk-First diagram gives us two things. First, it makes this trade off clea
So, here we've looked at Meeting Reality, which basically boils down to taking actions to expose yourself to hidden risks and seeing how it turns out:
-- Each action you take is a step on the [Risk Landscape](../thinking/Glossary.md#risk-landscape), trading off one set of risks for another.
-- Each action exposes new [Hidden Risks](../thinking/Glossary.md#hidden-risk), changing your [Internal Model](../thinking/Glossary.md#internal-model).
-- Ideally, each action should reduce the overall [Attendant Risk](../thinking/Glossary.md#attendant-risk) on the project (that is, puts it in a better place on the [Risk Landscape](../thinking/Glossary.md#risk-landscape).
+- Each action you take is a step on the [Risk Landscape](/thinking/Glossary.md#risk-landscape), trading off one set of risks for another.
+- Each action exposes new [Hidden Risks](/thinking/Glossary.md#hidden-risk), changing your [Internal Model](/thinking/Glossary.md#internal-model).
+- Ideally, each action should reduce the overall [Attendant Risk](/thinking/Glossary.md#attendant-risk) on the project (that is, put it in a better place on the [Risk Landscape](/thinking/Glossary.md#risk-landscape)).
Could it be that _everything_ you do on a software project is risk management? This is an idea explored next in [Just Risk](Just-Risk.md).
diff --git a/docs/thinking/One-Size-Fits-No-One.md b/docs/thinking/One-Size-Fits-No-One.md
index c8323bf20..0920217e8 100644
--- a/docs/thinking/One-Size-Fits-No-One.md
+++ b/docs/thinking/One-Size-Fits-No-One.md
@@ -31,7 +31,7 @@ Therefore, it stands to reason that software methodologies are all about handlin
## Methodologies Surface Hidden Risks...
-Back in the [Development Process](Development-Process.md) section we introduced a toy software methodology that a development team might follow when building software. It included steps like _analysis_, _coding_ and _testing_. We looked at how the purpose of each of these actions was to manage risk in the software delivery process. For example, it doesn't matter if a developer doesn't know that he's going to break "Feature Y", because the _Integration Testing_ part of the methodology will expose this [hidden risk](../thinking/Glossary.md#hidden-risk) in the testing stage, rather than in let it surface in production (where it becomes more expensive).
+Back in the [Development Process](Development-Process.md) section we introduced a toy software methodology that a development team might follow when building software. It included steps like _analysis_, _coding_ and _testing_. We looked at how the purpose of each of these actions was to manage risk in the software delivery process. For example, it doesn't matter if a developer doesn't know that he's going to break "Feature Y", because the _Integration Testing_ part of the methodology will expose this [hidden risk](/thinking/Glossary.md#hidden-risk) in the testing stage, rather than letting it surface in production (where it becomes more expensive).
## ... But Replace Judgement
@@ -53,18 +53,18 @@ In this section, we're going to have a brief look at some different software met
Waterfall is a family of methodologies advocating a linear, stepwise approach to the processes involved in delivering a software system. The basic idea behind Waterfall-style methodologies is that the software process is broken into distinct stages, as shown in the diagram above. These usually include:
-- [Requirements Capture](../practices/Requirements-Capture.md)
-- [Specification](../practices/Design.md)
-- [Implementation](../practices/Development.md)
-- [Verification](../practices/Testing.md)
-- [Delivery](../practices/Delivery.md) and [Operations](../practices/Support.md)
-- [Sign Offs](../practices/Sign-Off.md) at each stage
+- [Requirements Capture](/tags/Requirements-Capture)
+- [Specification](/tags/Design)
+- [Implementation](/tags/Coding)
+- [Verification](/tags/User-Acceptance-Testing)
+- [Delivery](/tags/Release) and [Operations](/tags/Support)
+- [Sign Offs](/tags/Approvals) at each stage
Because Waterfall methodologies are borrowed from _the construction industry_, they manage the risks that you would care about in a construction project. Specifically, minimising the risk of rework, and the risk of costs spiralling during the physical phase of the project. For example, pouring concrete is significantly easier than digging it out again after it sets.
![Waterfall, Specifications and Requirements Capture](/img/generated/introduction/waterfall2.png)
-Construction projects are often done by tender which means that the supplier will bid for the job of completing the project, and deliver it to a fixed price. This is a risk-management strategy for the client: they are transferring the risk of construction difficulties to the supplier, and avoiding the [Agency Risk](../risks/Agency-Risk.md) that the supplier will "pad" the project and take longer to implement it than necessary, charging them more in the process. In order for this to work, both sides need to have a fairly close understanding of what will be delivered, and this is why a specification is created.
+Construction projects are often done by tender which means that the supplier will bid for the job of completing the project, and deliver it to a fixed price. This is a risk-management strategy for the client: they are transferring the risk of construction difficulties to the supplier, and avoiding the [Agency Risk](/tags/Agency-Risk) that the supplier will "pad" the project and take longer to implement it than necessary, charging them more in the process. In order for this to work, both sides need to have a fairly close understanding of what will be delivered, and this is why a specification is created.
### The Wrong Risks?
@@ -98,23 +98,23 @@ Here are some high-level differences we see in some other popular methodologies:
- **[Project Management Body Of Knowledge (PMBoK)](https://en.wikipedia.org/wiki/Project_Management_Body_of_Knowledge)**. This is a formalisation of traditional project management practice. It prescribes best practices for managing scope, schedule, resources, communications, dependencies, stakeholders etc. on a project. Although "risk" is seen as a separate entity to be managed, all of the above areas are sources of risk within a project.
- - **[Scrum](https://en.wikipedia.org/wiki/Scrum)** is a popular Agile methodology. Arguably, it is less "extreme" than Extreme Programming, as it promotes a limited, more achievable set of agile practices, such as frequent releases, daily meetings, a product owner and retrospectives. This simplicity arguably makes it [simpler to learn and adapt to](../risks/Communication-Risk.md#learning-curve-risk) and probably contributes to Scrum's popularity over XP.
+ - **[Scrum](https://en.wikipedia.org/wiki/Scrum)** is a popular Agile methodology. Arguably, it is less "extreme" than Extreme Programming, as it promotes a limited, more achievable set of agile practices, such as frequent releases, daily meetings, a product owner and retrospectives. This simplicity arguably makes it [simpler to learn and adapt to](/tags/Learning-Curve-Risk) and probably contributes to Scrum's popularity over XP.
- - **[DevOps](https://en.wikipedia.org/wiki/DevOps)**. Many software systems struggle at the [boundary](../risks/Boundary-Risk.md) between "in development" and "in production". DevOps is an acknowledgement of this, and is about more closely aligning the feedback loops between the developers and the production system. It champions activities such as continuous deployment, automated releases and automated monitoring.
+ - **[DevOps](https://en.wikipedia.org/wiki/DevOps)**. Many software systems struggle at the [boundary](/tags/Boundary-Risk) between "in development" and "in production". DevOps is an acknowledgement of this, and is about more closely aligning the feedback loops between the developers and the production system. It champions activities such as continuous deployment, automated releases and automated monitoring.
-While this is a limited set of examples, you should be able to observe that the [actions](../thinking/Glossary.md#taking-action) promoted by a methodology are contingent on the risks it considers important.
+While this is a limited set of examples, you should be able to observe that the [actions](/thinking/Glossary.md#taking-action) promoted by a methodology are contingent on the risks it considers important.
## Effectiveness
> "All methodologies are based on fear. You try to set up habits to prevent your fears from becoming reality." - [Extreme Programming Explained, _Kent Beck_](http://amzn.eu/d/1vSqAWa)
-The promise of any methodology is that it will help you manage certain [Hidden Risks](../thinking/Glossary.md#hidden-risk). But this comes at the expense of the _effort_ you put into the practices of the methodology.
+The promise of any methodology is that it will help you manage certain [Hidden Risks](/thinking/Glossary.md#hidden-risk). But this comes at the expense of the _effort_ you put into the practices of the methodology.
-A methodology offers us a route through the [Risk Landscape](../thinking/Glossary.md#risk-landscape), based on the risks that the designers of the methodology care about. When we use the methodology, it means that we are baking into our behaviour actions to avoid those risks.
+A methodology offers us a route through the [Risk Landscape](/thinking/Glossary.md#risk-landscape), based on the risks that the designers of the methodology care about. When we use the methodology, it means that we are baking into our behaviour actions to avoid those risks.
### Methodological Failure
-When we [take action](../thinking/Glossary.md#taking-action) according to a methodology, we expect the [Payoff](../thinking/Glossary.md#payoff), and if this doesn't materialise, then we feel the methodology is failing us. It could just be that it is inappropriate to the _type of project_ we are running. Our [Risk Landscape](../thinking/Glossary.md#risk-landscape) may not be the one the designers of the methodology envisaged. For example:
+When we [take action](/thinking/Glossary.md#taking-action) according to a methodology, we expect the [Payoff](/thinking/Glossary.md#payoff), and if this doesn't materialise, then we feel the methodology is failing us. It could just be that it is inappropriate to the _type of project_ we are running. Our [Risk Landscape](/thinking/Glossary.md#risk-landscape) may not be the one the designers of the methodology envisaged. For example:
- NASA [doesn't follow an agile methodology](https://swehb.nasa.gov/display/7150/SWEREF-278) when launching spacecraft: there's no two-weekly launch that they can iterate over, and the risks of losing a rocket or satellite are simply too great to allow for iteration in production. The risk profile is just all wrong: you need to manage the risk of _losing hardware_ over the risk of _requirements changing_.
@@ -136,7 +136,7 @@ An off-the-shelf methodology is unlikely to fit the risks of any project exactly
![Methodologies, Actions, Risks, Goals](/img/generated/executive-summary/pattern_language.png)
-As the above diagram shows, different methodologies advocate different practices, and different practices manage different risks. If we want to understand methodologies, or choose practices from one, we really need to understand the _types of risks_ we face on software projects. This is where we [go next](../risks/Start.md).
+As the above diagram shows, different methodologies advocate different practices, and different practices manage different risks. If we want to understand methodologies, or choose practices from one, we really need to understand the _types of risks_ we face on software projects. This is where we [go next](/risks/Start.md).
The last part of this track is the [Glossary](Glossary.md), which summarises all the new terms we've covered here.
diff --git a/docs/thinking/Risk-First-Diagrams.md b/docs/thinking/Risk-First-Diagrams.md
index af3097eac..e3318218b 100644
--- a/docs/thinking/Risk-First-Diagrams.md
+++ b/docs/thinking/Risk-First-Diagrams.md
@@ -52,7 +52,7 @@ In the middle of a Risk-First diagram we see the actions you could take. In the
### On The Right
-_Nothing comes for free._ On the right, you can see the consequence or outcome of the actions you've taken: [Attendant Risks](../thinking/Glossary.md#attendant-risk) are the _new_ risks you now have as a result of taking the action.
+_Nothing comes for free._ On the right, you can see the consequence or outcome of the actions you've taken: [Attendant Risks](/thinking/Glossary.md#attendant-risk) are the _new_ risks you now have as a result of taking the action.
Hosting a dinner party opens you up to attendant risks like "Not Enough to Eat". As a result of that risk, we consider buying lots of snacks. As a result of _that_ risk, we start to consider whether our guests will be impressed with that.
@@ -60,11 +60,11 @@ Hosting a dinner party opens you up to attendant risks like "Not Enough to Eat".
It's worth pointing out that sometimes _the cure is worse than the disease_.
-By [Taking Action](../thinking/Glossary.md#taking-action) you might end up in a worse predicament than you started. For example, cutting your legs off _would definitely cure your in-growing toenail_. We have to use our judgement to decide on the right course of action!
+By [Taking Action](/thinking/Glossary.md#taking-action) you might end up in a worse predicament than you started. For example, cutting your legs off _would definitely cure your in-growing toenail_. We have to use our judgement to decide on the right course of action!
### A Balance of Risk
-So Risk-First diagrams represent a [balance of risk](../thinking/Glossary.md#balance-of-risk): whether or not you choose to take the action will depend on your evaluation of this balance. Are the things on the left worse or better than the things on the right?
+So Risk-First diagrams represent a [balance of risk](/thinking/Glossary.md#balance-of-risk): whether or not you choose to take the action will depend on your evaluation of this balance. Are the things on the left worse or better than the things on the right?
### Cause and Effect
@@ -72,7 +72,7 @@ So Risk-First diagrams represent a [balance of risk](../thinking/Glossary.md#bal
You can think about a Risk-First diagram, in a sense, as a way of visualising _cause and effect_. In _biological terms_ this is called the [Stimulus-Response Model](https://en.wikipedia.org/wiki/Stimulus–response_model), or sometimes, as shown in the diagram above, Stimulus-Response-Outcome. The items on the left of the diagram are the _stimulus_ part: they're the thing that makes us [Take Action](Glossary.md#taking-action) in the world. The middle part (the action) is the response and the right side is the outcome.
-There are [all kinds of risks](../risks/Risk-Landscape.md) we face in life and we attach different value or _criticality_ to them. Most people will want to take action against the worst risks they face in their lives and maybe put up with some of the lesser ones. Equally, we should also try and achieve our _most critical_ goals and let the lesser ones slide (at least, from a rational standpoint).
+There are [all kinds of risks](/risks/Risk-Landscape.md) we face in life and we attach different value or _criticality_ to them. Most people will want to take action against the worst risks they face in their lives and maybe put up with some of the lesser ones. Equally, we should also try and achieve our _most critical_ goals and let the lesser ones slide (at least, from a rational standpoint).
### Functions
@@ -89,27 +89,27 @@ There are a few other bits and pieces that crop up in these diagrams that we sho
### Containers For _Internal Models_
-The risks on the left and right are contained in rounded-boxes. That's because risks live in our [Internal Models](../thinking/Glossary.md#internal-model) - they're not real-world things you can reach out and touch. They're _contained_ in things like brains, spreadsheets, reports and programs.
+The risks on the left and right are contained in rounded-boxes. That's because risks live in our [Internal Models](/thinking/Glossary.md#internal-model) - they're not real-world things you can reach out and touch. They're _contained_ in things like brains, spreadsheets, reports and programs.
#### Example: Blaming Others
![Blame Game](/img/generated/introduction/blame.png)
-In the above diagram, you can see how Jim is worried about his job security, probably because he's made a mistake at work. Therefore, in his [Internal Model](../thinking/Glossary.md#internal-model) he has [Funding Risks](../risks/Scarcity-Risk.md#funding-risk), i.e. he's worried about money.
+In the above diagram, you can see how Jim is worried about his job security, probably because he's made a mistake at work. Therefore, in his [Internal Model](/thinking/Glossary.md#internal-model) he has [Funding Risks](/tags/Funding-Risk), i.e. he's worried about money.
-What does he do? His [Action](../thinking/Glossary.md#taking-action) is to blame Bob. If all goes according to plan, Jim has dealt with his risk and now Bob has the problems instead.
+What does he do? His [Action](/thinking/Glossary.md#taking-action) is to blame Bob. If all goes according to plan, Jim has dealt with his risk and now Bob has the problems instead.
### Mitigated and Hidden Risk
![Mitigated and Hidden](/img/generated/introduction/hidden-mitigated.png)
-The diagram above shows two other marks we use quite commonly: we put a "strike" through a risk to show that it's been dealt with in some way and the "cloud" icon denotes [Hidden Risks](../thinking/Glossary.md#hidden-risk)- those _unknown unknowns_ that we couldn't have predicted in advance.
+The diagram above shows two other marks we use quite commonly: we put a "strike" through a risk to show that it's been dealt with in some way and the "cloud" icon denotes [Hidden Risks](/thinking/Glossary.md#hidden-risk) - those _unknown unknowns_ that we couldn't have predicted in advance.
### Artifacts
![Artifacts](/img/generated/introduction/artifacts.png)
-Sometimes, we add _artifacts_ to Risk-First diagrams. That is, real-world things such as people, documents, code, servers and so on. This is because as well as changing [Internal Models](../thinking/Glossary.md#internal-model), [Taking Action](../thinking/Glossary.md#taking-action) will produce real results and consume inputs in order to do so. So, it's sometimes helpful to include these on the diagram. Some examples are shown in the diagram above.
+Sometimes, we add _artifacts_ to Risk-First diagrams. That is, real-world things such as people, documents, code, servers and so on. This is because as well as changing [Internal Models](/thinking/Glossary.md#internal-model), [Taking Action](/thinking/Glossary.md#taking-action) will produce real results and consume inputs in order to do so. So, it's sometimes helpful to include these on the diagram. Some examples are shown in the diagram above.
### Causation and Correlation
diff --git a/docs/thinking/Track-Risk.md b/docs/thinking/Track-Risk.md
index ec7acc5e3..ae9f738de 100644
--- a/docs/thinking/Track-Risk.md
+++ b/docs/thinking/Track-Risk.md
@@ -25,7 +25,7 @@ In this section we're going to look at the importance of keeping track of risks.
Most developers are familiar with recording issues in an issue tracker. As we saw in [Just Risk](Just-Risk.md), _issues are a type of risk_, so it makes sense that issue trackers could be used for recording all project risks. Within risk management, this is actually called a [Risk Register](https://en.wikipedia.org/wiki/Risk_register). Typically, this will include for each risk (see the sketch after this list):
- The **name** of the risk, or other identifier.
- - A **categories** to which the risk belongs (this is the focus of the [Risk Landscape](../risks/Risk-Landscape.md) section in Part 2).
+ - The **categories** to which the risk belongs (this is the focus of the [Risk Landscape](/risks/Risk-Landscape.md) section in Part 2).
- A **brief description** or name of the risk to make the risk easy to discuss.
- Some estimate for the **Impact**, **Probability** or **Risk Score** of the risk.
- Proposed actions and a log of the progress made to manage the risk.
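To make those fields concrete, here's a minimal sketch of a single Risk Register entry as a plain JavaScript object. The field names and the scoring scheme are illustrative assumptions, not a prescribed schema:

```js
// A minimal, hypothetical Risk Register entry.
// Field names and scoring are illustrative assumptions, not a prescribed schema.
const riskEntry = {
  id: "RISK-042",                        // name / identifier
  categories: ["Schedule-Risk"],         // where it sits on the Risk Landscape
  description: "Shared UAT environment means test runs block each other",
  impact: 3,                             // say, 1 (negligible) to 5 (severe)
  probability: 0.5,                      // estimated chance of it materialising
  get score() { return this.impact * this.probability },
  actions: [
    { date: "2024-03-01", note: "Requested a dedicated UAT environment" }
  ]
};

console.log(riskEntry.score);            // 1.5 - one way to rank competing risks
```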
@@ -64,7 +64,7 @@ We'll come back to this in a minute.
Arguably, Risk-First uses the term 'Risk' wrongly: most literature suggests [risk can be measured](https://keydifferences.com/difference-between-risk-and-uncertainty.html) whereas uncertainty represents things that cannot.
-I am using **risk** everywhere because later we will talk about specific risks (e.g. [Boundary Risk](../risks/Boundary-Risk.md) or [Complexity Risk](../risks/Complexity-Risk.md)) and it doesn't feel grammatically correct to talk about those as **uncertainties**.
+I am using **risk** everywhere because later we will talk about specific risks (e.g. [Boundary Risk](/tags/Boundary-Risk) or [Complexity Risk](/tags/Complexity-Risk)) and it doesn't feel grammatically correct to talk about those as **uncertainties**.
Additionally, there is pre-existing usage in banking of terms like [Operational Risk](https://en.wikipedia.org/wiki/Operational_risk) or [Reputational risk](https://www.investopedia.com/terms/r/reputational-risk.asp) which are also not really measurable a priori.
@@ -92,11 +92,11 @@ Much more likely, it will have a field for _priority_, or allow the ordering of
A risk matrix presents a graphical view of where risks exist. The diagram above is an example, showing the risks from the dinner party in the [A Simple Scenario](A-Simple-Scenario.md) section. The useful thing about this visualisation is that it helps focus attention on the risks at the top and to the right - those with the biggest impact and probability.
-Risks at the bottom or left side of the diagram are candidates for being ignored or simply "accepted" (which we'll come to in a [later section](De-Risking#retain)). If you're using something like [Scrum](../practices/Glossary-Of-Practices.md#scrum), then these might be issues that you remove in the process of [backlog refinement](../practices/Glossary-Of-Practices.md#backlog-refinement).
+Risks at the bottom or left side of the diagram are candidates for being ignored or simply "accepted" (which we'll come to in a [later section](De-Risking#retain)). If you're using something like [Scrum](/practices/Glossary-Of-Practices.md#scrum), then these might be issues that you remove in the process of [backlog refinement](/practices/Glossary-Of-Practices.md#backlog-refinement).
## Incorporating Payoff
-The diagram above is _helpful_ in deciding what to focus on next, but it doesn't consider [Payoff](../thinking/Glossary.md#payoff). The reason for this is that up until this point, we've been tracking risks but not necessarily figuring out what to do about them. Quite often when I raise an issue on a project I will also include the details of the fix for that issue, or maybe I'll _only_ include the details of the fix.
+The diagram above is _helpful_ in deciding what to focus on next, but it doesn't consider [Payoff](/thinking/Glossary.md#payoff). The reason for this is that up until this point, we've been tracking risks but not necessarily figuring out what to do about them. Quite often when I raise an issue on a project I will also include the details of the fix for that issue, or maybe I'll _only_ include the details of the fix.
For example, let's say I raise an issue saying that I want a button to sort an access control list by the surnames of the users in the list. What am I really getting at here? This could be a solution to the problem that _I'm wasting time looking for users in a list_. Alternatively, it could be trying to solve the problem that _I'm struggling to keep the right people on the list_. Or maybe both. The risk of the former is around wasted time (for me) but the risk of the latter might be a security risk and might be higher priority.
@@ -112,16 +112,16 @@ _Really good design_ would be coming up with a course of action that takes care
## Criticism
-One of the criticisms of the [Risk Register](Track-Risk.md#risk-registers) approach is that of [mistaking the map for the territory](../risks/Map-And-Territory-Risk.md). That is, mistakenly believing that what's on the Risk Register _is all there is_.
+One of the criticisms of the [Risk Register](Track-Risk.md#risk-registers) approach is that of [mistaking the map for the territory](/tags/Map-And-Territory-Risk). That is, mistakenly believing that what's on the Risk Register _is all there is_.
-In the preceding discussions, I have been careful to point out the existence of [Hidden Risks](../thinking/Glossary.md#hidden-risk) for that very reason. Or, to put another way:
+In the preceding discussions, I have been careful to point out the existence of [Hidden Risks](/thinking/Glossary.md#hidden-risk) for that very reason. Or, to put it another way:
> "What we don't know is what usually gets us killed" - [Petyr Baelish, _Game of Thrones_](https://medium.com/@TanyaMardi/petyr-baelishs-best-quotes-on-game-of-thrones-1ea92968db5c)
Donald Rumsfeld's famous [Known Knowns](https://en.wikipedia.org/wiki/There_are_known_knowns) is also a helpful conceptualisation:
- - **A _known_ unknown** is an [Attendant Risk](../thinking/Glossary.md#attendant-risk). i.e. something you are aware of, but where the precise degree of threat can't be established.
- - **An _unknown_ unknown** is a [Hidden Risk](../thinking/Glossary.md#hidden-risk). i.e a risk you haven't even thought to exist yet.
+ - **A _known_ unknown** is an [Attendant Risk](/thinking/Glossary.md#attendant-risk), i.e. something you are aware of, but where the precise degree of threat can't be established.
+ - **An _unknown_ unknown** is a [Hidden Risk](/thinking/Glossary.md#hidden-risk), i.e. a risk you haven't even thought of yet.
## Out of the Window
diff --git a/docusaurus.config.js b/docusaurus.config.js
index 89eb0eca2..843c1b083 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -12,10 +12,10 @@ const navLinks = [ { to: '/overview/Start', label: 'Overview', position: 'left'
{ to: '/thinking/Start', label: 'Thinking', position: 'left' },
{ to: '/risks/Start', label: 'Risks', position: 'left' },
{ to: '/practices/Start', label: 'Practices', position: 'left' },
- { to: '/bets/Start', label: 'Bets', position: 'left' },
{ to: '/methods/Start', label: 'Methods', position: 'left' },
- { to: '/estimating/Start', label: 'Estimating', position: 'left' },
{ to: '/books/Start', label: 'Books', position: 'left' },
+ { to: '/bets/Start', label: 'Bets', position: 'left' },
+ { to: '/estimating/Start', label: 'Estimating', position: 'left' },
{ to: '/presentations/Start', label: 'Presentations', position: 'left' },
]
diff --git a/src/theme/PracticeIntro/index.js b/src/theme/PracticeIntro/index.js
index ea9d87964..96d94474c 100644
--- a/src/theme/PracticeIntro/index.js
+++ b/src/theme/PracticeIntro/index.js
@@ -5,8 +5,13 @@ import { usePluginData } from '@docusaurus/useGlobalData'
import { useLocation } from '@docusaurus/router';
function formatReadableTag(page) {
- return page.replaceAll("-", " ").substring(page.lastIndexOf("/")+1)
-}
+  let out = page.replaceAll("-", " ").substring(page.lastIndexOf("/") + 1)
+  // Strip any #fragment so only the readable tag name remains
+  if (out.indexOf("#") > -1) {
+    out = out.substring(0, out.indexOf("#"))
+  }
+
+  return out
+}
function tagUrl(tag) {
return "/tags/"+tag.replaceAll(" ", "-")
@@ -88,7 +93,7 @@ export default ({details}) => {
sortedAka.map(m => )
}
- Related Practices
+ Related
{
details.practice.related.map(i => - {formatReadableTag(i)}
)
diff --git a/src/theme/TagList/index.js b/src/theme/TagList/index.js
index fbab32903..bb353f99f 100644
--- a/src/theme/TagList/index.js
+++ b/src/theme/TagList/index.js
@@ -26,6 +26,11 @@ function DocItemImage({ doc }) {
);
}
+const sorts = {
+ "title" : (a, b) => { return a.title.localeCompare(b.title) },
+ "default" : (a, b) => { return a.order - b.order }
+}
+
export default function TagList(props) {
@@ -45,8 +50,10 @@ export default function TagList(props) {
const filter = props.filter ? '/' + props.filter + '/' : ''
const location = useLocation().pathname;
+
+ const sort = props.sort ?? "default"
- oneTag.sort((a, b) => a.order - b.order);
+ oneTag.sort(sorts[sort]);
// console.log(oneTag[0].permalink.indexOf(location))
// console.log(filter)
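With this change, a page can opt into alphabetical ordering via the new `sort` prop while existing usages keep the order-field behaviour. A hedged sketch of how that might look in MDX (the `filter` value is a made-up example):

```jsx
{/* Hypothetical usages of the updated TagList component */}
<TagList filter="practices" sort="title" />  {/* alphabetical by title */}
<TagList filter="practices" />               {/* unchanged: sorts by `order` */}
```

Note that an unrecognised `sort` value would pass `undefined` to `Array.prototype.sort`, silently falling back to lexicographic ordering of the stringified items - worth guarding against if more sorts are added.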
diff --git a/unused_content/complexity/Crystals-And-Code.md b/unused_content/complexity/Crystals-And-Code.md
index b0fedb3a5..5560f3f97 100644
--- a/unused_content/complexity/Crystals-And-Code.md
+++ b/unused_content/complexity/Crystals-And-Code.md
@@ -37,7 +37,7 @@ Most IS's start small, and grow from there (eBay started with a single niche mar
What properties of IS's are like the _regularity_ we see in crystals? How about things like:
- - **Managed Data**, with clear, consistent, interacting data-types. In distributed systems, there will be a policy on [CAP](../risks/Coordination-Risk.md#cap-theorem). There is likely to be a high degree of data **Normalization** and a **well-Factored** design.
+ - **Managed Data**, with clear, consistent, interacting data-types. In distributed systems, there will be a policy on [CAP](/risks/Coordination-Risk.md#cap-theorem). There is likely to be a high degree of data **Normalization** and a **well-Factored** design.
- **ACID** properties, such as Atomicity, Consistency, Isolation, Durability of transactions.
- **SLAs**: Response times, ownership, procedures, and other _non-functional requirements_ that are clearly defined.
- **Support Teams and Knowledge Bases**: there are procedures in place for _understanding and using_ IS's.
@@ -127,7 +127,7 @@ Perhaps you can "grow" the IS (defects and all) in the direction required by the
This means setting up a new team somewhere who are allowed to "iterate rapidly", building something new without the weight of history and existing defects slowing them down. As in Stage 2, eventually, if this team are successful, there will be new defects to resolve between the new and the old systems.
-Defects in the crystalline structure are effectively another way to envisage [Technical Debt](../risks/Complexity-Risk.md#technical-debt).
+Defects in the crystalline structure are effectively another way to envisage [Technical Debt](/risks/Complexity-Risk.md#technical-debt).
## Diamonds Aren't Forever
diff --git a/unused_content/complexity/End-Of-Complexity.md b/unused_content/complexity/End-Of-Complexity.md
index 5da6c839e..0870badd2 100644
--- a/unused_content/complexity/End-Of-Complexity.md
+++ b/unused_content/complexity/End-Of-Complexity.md
@@ -60,7 +60,7 @@ There are various efforts to create out-of-the-box solutions:
- Things like [CodeAnywhere](https://codeanywhere.com), [GitHub](https://github.com) and [GitLab](https://gitlab.com) go a long way to simplifying tool choice (although you still have to choose one of those).
- [AWS](https://aws.amazon.com) has a "menu" of services for you to choose from (although this too is bewildering).
-Ultimately, curators are running to stand still, facing huge [Red-Queen Risk](../risks/Scarcity-Risk.md#red-queen-risk). Their efforts to consolidate and simplify the landscape can't possibly keep up with the pace of evolution in the space they are working in.
+Ultimately, curators are running to stand still, facing huge [Red-Queen Risk](/tags/Red-Queen-Risk). Their efforts to consolidate and simplify the landscape can't possibly keep up with the pace of evolution in the space they are working in.
## Potential Solution #5: AI
diff --git a/unused_content/complexity/Start.md b/unused_content/complexity/Start.md
index 43d7dbc76..c74eea3a0 100644
--- a/unused_content/complexity/Start.md
+++ b/unused_content/complexity/Start.md
@@ -27,6 +27,6 @@ This series of articles aims to give you:
## Background Reading
-This trail assumes familiarity with the [Risk-First Risks](../risks/Risk-Landscape.md), especially [Complexity Risk](../risks/Complexity-Risk.md).
+This trail assumes familiarity with the [Risk-First Risks](/risks/Risk-Landscape.md), especially [Complexity Risk](/tags/Complexity-Risk).
\ No newline at end of file
diff --git a/unused_content/misc/Next-Scrum.md b/unused_content/misc/Next-Scrum.md
index 560ee7d02..c3df2d414 100644
--- a/unused_content/misc/Next-Scrum.md
+++ b/unused_content/misc/Next-Scrum.md
@@ -24,9 +24,9 @@ But nevertheless, I'm still seeing a fair bit of traffic over the last couple of
It's got me thinking about what I want to do next on this project. A quick recap:
-- [Thinking Risk-First](../thinking/Start.md) was explaining how Software Development is really an exercise in risk management. Although that sounds a bit dull (maybe complex even), I try to explain it really simply.
+- [Thinking Risk-First](/thinking/Start.md) explained how Software Development is really an exercise in risk management. Although that sounds a bit dull (maybe even complex), I try to explain it really simply.
-- [The Risk Catalog](../risks/Start.md) looks at the types of risks we face in Software Development. I spend a while breaking down the different types.
+- [The Risk Catalog](/risks/Start.md) looks at the types of risks we face in Software Development. I spend a while breaking down the different categories.
So next, we should look at the different _techniques/practices/actions_ we use, and explain their qualities. Generally, all activity boils down to something like this:
diff --git a/unused_content/misc/Post-Agile.md b/unused_content/misc/Post-Agile.md
index 5d862050e..c88462645 100644
--- a/unused_content/misc/Post-Agile.md
+++ b/unused_content/misc/Post-Agile.md
@@ -29,7 +29,7 @@ Management (with a capital-M) has finally bought into Agile techniques and indus
![A Hype Cycle](/img/numbers/hype4.png)
-After 20 years, it seems like Agile has finally crested the [Hype-Cycle](../risks/Map-And-Territory-Risk.md#audience) and arrived at the _Peak of Inflated Expectations_.
+After 20 years, it seems like Agile has finally crested the [Hype-Cycle](/risks/Map-And-Territory-Risk.md#audience) and arrived at the _Peak of Inflated Expectations_.
But many authors on the Internet have started criticising Agile in its current state. This short article will summarise some of the recent discussion around "Post-Agile" as a movement.
diff --git a/unused_content/misc/Timed-Thinking.md b/unused_content/misc/Timed-Thinking.md
index 55acf0f8b..feef21a77 100644
--- a/unused_content/misc/Timed-Thinking.md
+++ b/unused_content/misc/Timed-Thinking.md
@@ -60,11 +60,11 @@ Ideally, you'll want to perform debugging experiments that -whatever the outcome
### Designing
-As you might expect, a [Risk-First](https://riskfirst.org) approach to software design would be one where you don't introduce unnecessary risk to your project, be it in the form of [Dependency Risks](../risks/Dependency-Risk.md) (try not to add them), [Complexity Risks](../risks/Complexity-Risk.md) (keep the codebase nice and tight), [Feature Risks](/tags/Feature-Risk) (make sure you're building the right thing) and so on.
+As you might expect, a [Risk-First](https://riskfirst.org) approach to software design would be one where you don't introduce unnecessary risk to your project, be it in the form of [Dependency Risks](/tags/Dependency-Risk) (try not to add them), [Complexity Risks](/tags/Complexity-Risk) (keep the codebase nice and tight), [Feature Risks](/tags/Feature-Risk) (make sure you're building the right thing) and so on.
It's tempting to just throw code together and then hammer out the issues as you go. Maybe this is even how some people think Agile should work.
-But you can do a lot of work up-front with Timed Thinking. Take your design. Think hard about it using the above technique. Consider all the [Risks](../risks/Risk-Landscape.md) from the Risk-First collection. Work out which ones are going to sink you. Can you re-design to avoid them entirely? Probably you can.
+But you can do a lot of work up-front with Timed Thinking. Take your design. Think hard about it using the above technique. Consider all the [Risks](/risks/Risk-Landscape.md) from the Risk-First collection. Work out which ones are going to sink you. Can you re-design to avoid them entirely? Probably you can.
You can also think about this from a constraints point-of-view. Start the session by enumerating all the constraints you are under. Then, start to try and design within the space that's left. Or, start with the design _you already have in mind_ and subject it to all the constraints you can think of. Even within half an hour, this can be tremendously insightful.