diff --git a/docs/estimating/Stop-Estimating-Start-Navigating.md b/docs/estimating/Stop-Estimating-Start-Navigating.md
deleted file mode 100644
index ffe9b284d..000000000
--- a/docs/estimating/Stop-Estimating-Start-Navigating.md
+++ /dev/null
@@ -1,122 +0,0 @@
----
-title: Stop Estimating, Start Navigating
-description: Part of the 'Estimating' Risk-First Track, looking at how to work without estimates.
-
-featured:
- class: bg1
- element: 'Risk-First Analysis'
-sidebar_position: 9
----
-
-![Under Construction](/img/state/uc.png)
-
-# Stop Estimating, Start Navigating
-
-This is the _ninth_ article in the [Risk-First](https://riskfirst.org) track on [Estimating](Start). We've come a long way:
-
-- In the first four articles, [Fill-The-Bucket](Fill-The-Bucket), [Kitchen Cabinet](Kitchen-Cabinet), [Journeys](Journeys) and [Fractals](Fractals) we looked at the various reasons why estimating is such a nightmare on software projects. This is summarised in [Analogies](Analogies). The upshot is that predictable, well understood, repeatable things can be estimated with some confidence. However, as soon as software is predictable, repeatable and well-understood, _you're doing it wrong_.
-
-- In article seven, we explored how [Scrum](Fixing-Scrum), the popular Agile methodology, fails to understand this crucial problem with estimates (among other failings).
-
-- Then, in [Risk-First Analysis](Risk-First-Analysis) we look at how we can work out what to build by examining what [risks](/tags/Risk) we'd like to address and which [goals](/tags/Risk) or [Upside Risks](/tags/Upside-Risk) we'd like to see happen.
-
-So, now we're up to date. It's article nine, and I was going to build on [Risk-First Analysis](Risk-First-Analysis) to show how to plan work for a team of people over a week, a month, a year.
-
-## Something Happened
-
-But then, Covid-19 happened. This is the first time I've started a new article since then, so for the UK, six months have passed. Luckily, the reason for this is not that I (or any of my close friends or relatives) were ill, but instead because of a realisation _Agile techniques as they were once imagined, are now all but impossible_.
-
-Let's look at a few examples:
-
- - **Lunch**: Kent Beck and others promoted the idea of developers eating together, and working in "Two-Pizza Teams". I haven't had lunch with my colleagues for over six months. And half of them are on a different continent to me anyway.
-
- - **Pair Programming**: Once the darling of [XP](http://www.extremeprogramming.org), Pair Programming was never hugely popular, although was often a useful tool to have in the arsenal. If I pair program now, it's via me sharing my screen with someone, while they watch and comment (or vice versa). I can't gesture at a bug while my partner is in control, and I find that I call out line-numbers a lot. This has changed, but not as much as...
-
- - **Stand-Ups**: Stand-up meetings were already dying. No-one liked standing up and they often got held pushed onto conference calls because people needed to dial in from home or the remote office they worked in. Whatever remnants of the classic "Agile Stand-Up" meeting existed prior to Covid-19 died immediately at that moment. This practice migrated to the chat windows of [Slack](https://slack.com) and [Teams](https://teams.microsoft.com), and became asynchronous.
-
- - **Planning Poker**: Another meeting, consigned to conference-call. Planning meetings are invariably tedious. It's hard to maintain the attention of all the developers when they are in the room together, let alone with the unlimited distractions of the whole Internet available right next to you. Planning sessions and "management off-sites" are at this point, broken.
-
- - **Post-It Notes**: At one time, post-it notes were arranged on a white-board to indicate work in progress. The planning meeting would be about re-defining the work on the notes, prioritising the notes, sub-dividing the work on the notes and allocating the notes to people to implement. But the notes have been in the bin a long time, and we use JIRA or GitHub issues instead,
-
-### Working Physically Together
-
-
-
-
-### Competing Goals
-
-
-A confounding problem with goals is that _everyone has their own_. While the business might be there to _make money_, everyone else on the project will have their own _personal_ goals (as you can see on the diagram above).
-
-Let's look at the "Fix the Continuous Integration Pipeline" task. It might turn out that we have competing goals here: the Development Team might want to spend time on this task, as the existing, poor CI tooling is damaging productivity and causing acrimony in the team. No one wants to work in an unproductive environment.
-
-That said, the product owners might worry about a different risk: while diverting part of the development effort to fixing CI might help productivity _in the long term_, it might add pressure to the schedule _in the short term_, and delay other important tasks from getting done, as shown in the above diagram.
-
-
-
-
-
-The third question you need to always be asking is: _what is our goal?_
-
-Considered individually, the tasks on our backlog clearly are operations which change the risks we are facing, but unless we understand the _goal_ or _goals_ of the product, we're not really in a position to make judgements about whether some set of risks is better or worse.
-
-![One Goal](/img/generated/estimating/planner/one-goal.svg)
-
-If we are a startup with some investors, they might have set us the goal themselves. Perhaps the future funding of the business is predicated on our ability to generate a certain number of subscribers? Then the business goal might look like that in the diagram above.
-
-##### The rounded-corner containers with bold titles show _who a risk affects_.
-
-As you can see in this diagram _goals_ look very similar to _risks_. This is by design: a _goal_ is really an "upside risk": it's a possible future, but one we'd like to _move towards_ instead of _away from_.
-
-
-
-This s
-
-Let's consider the third task: refactoring the subscription model.
-
--- image dbd
-
-The above diagram gives us some indication _why_ the tasks are on the backlog:
-
- - **Refactoring subscriptions** is all about the bottom line: there's a risk that the company _isn't profitable enough_. That might translate into management being replaced, or bankruptcy, or something.
- -
-
- - **The Search Function** addresses a risk that our _clients may go elsewhere_: they're annoyed with the product's lack of functionality.
-
-
-
-
-
- - Is velocity important?
-
- Scrum is constantly a race to get stuff done and meet estimates. Quite often, the estimates turn out to be BS.
-
- Here's the rub: 90% of everything I've ever written has gone in the bin.
-
- This means, if I just concentrated on doing the _useful_ stuff, I would be 10X better than I am now.
-
-What does that mean?
-
-> "Simplicity--the art of maximizing the amount of work not done--is essential."
-
-## Going Meta
-
-
-
-
-The problem is that estimation only addresses a single risk: runway risk/time resource. It says nothing about other risks that you might bump into.
-
-Why is all my code in the bin? I guess either it was badly written (which, probably it isn't, given that it's probably not objectively worse than the 10% that is in production) or, more likely, it didn't address [Feature Fit Risk](/tags/Feature-Fit-Risk) properly, or, it was useful, but people didn't find out about how amazing it was. Or, it was built to work on top of X, but then X was decommissioned (Dependency Risk) or, the budget was cut from the department and there was no funding (Dependency Risk... but maybe caused by Feature Fit Risk)?
-
-No estimates says forget about trying to get the numbers right, because you can't. What's better than that? Let's try and focus on reducing that 90% of waste by thinking about _risks other than time_.
-
-> "Ian: Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should. " - [Ian Malcolm, _Jurassic Park_](https://en.wikipedia.org/wiki/Jurassic_Park).
-
-
-Risk-First Planning Game:
-
-X: time
-Y: importance
-
-Place risks on the board (as well as goals). Try and mitigate risks with actions. Consider whether
-
diff --git a/docs/practices/External-Relations/Outsourcing.md b/docs/practices/External-Relations/Outsourcing.md
index 01e4aec6b..03fae8fed 100644
--- a/docs/practices/External-Relations/Outsourcing.md
+++ b/docs/practices/External-Relations/Outsourcing.md
@@ -52,7 +52,7 @@ In the extreme, I've seen situations where the team at one location has decided
When this happens, it's because somehow the team feel that [Coordination Risk](/tags/Coordination-Risk) is more unmanageable than [Schedule Risk](/tags/Schedule-Risk).
-There are some mitigations here: video-chat, moving staff from location-to-location for face-time, frequent [show-and-tell](/tags/Review), or simply modularizing accross geographic boundaries, in respect of [Conway's Law](/tags/Coordination-Risk):
+There are some mitigations here: video-chat, moving staff from location-to-location for face-time, frequent [show-and-tell](/tags/Review), or simply modularizing across geographic boundaries, in respect of [Conway's Law](/tags/Coordination-Risk):
> "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations." - _[M. Conway](https://en.wikipedia.org/wiki/Conways_law)_
diff --git a/docs/presentations/OpenSource/index.md b/docs/presentations/OpenSource/index.md
index 0dfe5844c..574b5ab70 100644
--- a/docs/presentations/OpenSource/index.md
+++ b/docs/presentations/OpenSource/index.md
@@ -116,7 +116,7 @@ hide_table_of_contents: true
-
Ok, so here’s a story from before I was at Deutsche Bank, when I was working with Credit Suisse in their Risk department. We were building a new risk calculator. And the process went something like this. The guy on the left, he’s the Analyst. He writes a requirements document, explaining exactly how he thinks the calcuator should work. Then, he goes to the pub. Often, for several days.
+
Ok, so here’s a story from before I was at Deutsche Bank, when I was working with Credit Suisse in their Risk department. We were building a new risk calculator. And the process went something like this. The guy on the left, he’s the Analyst. He writes a requirements document, explaining exactly how he thinks the calculator should work. Then, he goes to the pub. Often, for several days.
Next, the developer picks up these requirements, and starts programming.
diff --git a/docs/risks/Dependency-Risks/Agency-Risks/Agency-Risk.md b/docs/risks/Dependency-Risks/Agency-Risks/Agency-Risk.md
new file mode 100644
index 000000000..edc10379c
--- /dev/null
+++ b/docs/risks/Dependency-Risks/Agency-Risks/Agency-Risk.md
@@ -0,0 +1,261 @@
+---
+title: Agency Risk
+description: People all have their own agendas. What do you do about that?
+
+slug: /risks/Agency-Risk
+tags:
+ - Risks
+ - Goal
+ - Agency Risk
+ - Agent
+ - Security Risk
+definitions:
+ - name: Agent
+    description: Anything - a person, team or software process - which has agency over its actions.
+featured:
+ class: c
+ element: '
+'
+sidebar_position: 12
+tweet: yes
+part_of: Dependency Risk
+---
+
+
+
+Coordinating a team is difficult enough when everyone on the team has a single [Goal](/tags/Goal). But people have their own goals too. Sometimes their goals harmlessly co-exist with the team's goal, other times they don't.
+
+This is [Agency Risk](/tags/Agency-Risk).
+
+![Agency Risk](/img/generated/risks/agency/agency-risk.svg)
+
+In this section, we are going to take a closer look at how [Agency Risk](/tags/Agency-Risk) arises, in particular we will:
+
+ - apply the concept of [Agency Risk](/tags/Agency-Risk) in software development
+ - define a model for understanding [Agency Risk](/tags/Agency-Risk)
+ - look at some common issues in software development, and analyse how they have their roots in [Agency Risk](/tags/Agency-Risk)
+ - look at how [Agency Risk](/tags/Agency-Risk) applies not just to people but also to _whole teams_ and _software agents_
+ - look at the various ways to mitigate [Agency Risk](/tags/Agency-Risk), irrespective of what type of agent we are looking at. (We'll specifically consider _software agents_, _humans_ and _cells in the body_.)
+
+## Agency In Software Development
+
+To introduce [Agency Risk](/tags/Agency-Risk), let's first look at the **Principal-Agent Dilemma**. This term comes from finance and refers to the situation where you (the "principal") entrust your money to someone (the "agent") in order to invest it, but they don't necessarily have your best interests at heart. They may instead elect to invest the money in ways that help them, or outright steal it.
+
+> "This dilemma exists in circumstances where agents are motivated to act in their own best interests, which are contrary to those of their principals, and is an example of moral hazard." - [Principal-Agent Problem, _Wikipedia_](https://en.wikipedia.org/wiki/Principal–agent_problem)
+
+The less visibility you have of the agent's activities, the bigger the risk. However, the _whole point_ of giving the money to the agent was that you would have to spend less time and effort managing it, hence the dilemma.
+
+In software development, we're not lending each other money, but we _are_ being paid by the project sponsor, so they are assuming [Agency Risk](/tags/Agency-Risk) by employing us.
+
+[Agency Risk](/tags/Agency-Risk) doesn't just apply to people: it can apply to _running software_ or _whole teams_ - anything which has agency over its actions.
+
+> "Agency is the capacity of an actor to act in a given environment... Agency may either be classified as unconscious, involuntary behaviour, or purposeful, goal directed activity (intentional action). " - [Agency, _Wikipedia_](https://en.wikipedia.org/wiki/Agency_(philosophy))
+
+## A Model For Agency Risk
+
+![Goal Hierarchy](/img/generated/risks/agency/hierarchy.svg)
+
+Although the definition of [Agency Risk](/tags/Agency-Risk) above pertains to looking after other people's money, this is just a single example of a wider issue which is best understood by appreciating that humans have a _hierarchy of concern_ with respect to their goals, as shown in the diagram above. This hierarchy has arisen from millennia of evolution and helps us prioritise competing goals, generally in favour of _preserving our genes_.
+
+The model above helps us explain the principal-agent problem: when an agent is faced with the dilemma of self-interest (perhaps protecting their family) versus their employer's interests, they will choose their family. But it goes further - this model explains a lot of human behaviour. It explains why some people:
+
+ - will help their friends and colleagues every day, but perhaps fail to give to charities helping people in far worse conditions.
+ - love their pets (who they consider in the _immediate family_ group) but eat other animals (somewhere off the bottom).
+ - can be fiercely _nationalistic_ and tribal (supporting the goals of the third level) while also opposing _immigration_ (which would help people in the fourth level).
+
+[Agency Risk](/tags/Agency-Risk) clearly includes the behaviour of [Bad Actors](https://en.wiktionary.org/wiki/bad_actor) but is not limited to them: there are various "shades of grey" involved. We can often understand and sympathise with the decisions agents make based on an understanding of this hierarchy.
+
+**NB:** Don't get hung up on the fact that the diagram only has four levels. You might want to add other levels in there depending on your personal circumstances. The take-away is that there is a hierarchy at all, and that at the top, the people/things we care about _most_ are few in number.
+
+## Agency Risk In Software Development
+
+We shouldn't expect people on a project to sacrifice their personal lives for the success of the project, right? Except that ["Crunch Time"](https://en.wikipedia.org/wiki/Video_game_developer#"Crunch_time") is exactly how some software companies work:
+
+> "Game development... requires long working hours and dedication... Some video game developers (such as Electronic Arts) have been accused of the excessive invocation of 'crunch time'. 'Crunch time' is the point at which the team is thought to be failing to achieve milestones needed to launch a game on schedule. " - [Crunch Time, _Wikipedia_](https://en.wikipedia.org/wiki/Video_game_developer#"Crunch_time")
+
+People taking time off, going to funerals, looking after sick relatives and so on are all acceptable forms of [Agency Risk](/tags/Agency-Risk). They are a risk of having _staff_ rather than _slaves_.
+
+![Heroism](/img/generated/risks/agency/heroism.svg)
+
+Where an agent _excessively_ prioritises their own goals over the group's, we term this _selfishness_ or perhaps _nepotism_. Conversely, putting the tribe's or the team's needs over your own is _heroism_.
+
+### The Hero
+
+> "The one who stays later than the others is a hero. " - [Hero Culture, _Ward's Wiki_](https://wiki.c2.com/?HeroCulture)
+
+Heroes put in more hours and try to rescue projects single-handedly, often cutting corners like team communication and process in order to get there.
+
+Sometimes projects don't get done without heroes. But other times, the hero has an alternative agenda to just getting the project done:
+
+- A need for control and for their own vision.
+- A preference to work alone.
+- A desire for recognition and acclaim from colleagues.
+- For the job security of being a [Key Person](https://en.wikipedia.org/wiki/Key_person_insurance).
+
+A team _can_ make use of heroism but it's a double-edged sword. The hero can become [a bottleneck](/tags/Coordination-Risk) to work getting done and because they want to solve all the problems themselves, they [under-communicate](/tags/Communication-Risk).
+
+### CV Building
+
+CV Building is when someone decides that the project needs a dose of "Some Technology X", but in actual fact, this is either completely unhelpful to the project (incurring large amounts of [Complexity Risk](/tags/Complexity-Risk)), or merely a poor alternative to something else.
+
+It's very easy to spot CV building: look for choices of technology that are disproportionately complex compared to the problem they solve, and challenge them by suggesting a simpler alternative.
+
+### Devil Makes Work
+
+Heroes can be useful, but _underused_ project members are a nightmare. People who are not fully occupied begin to worry that the team would actually be better off without them, and then wonder whether their jobs are at risk.
+
+Even if they don't worry about their jobs, sometimes they need ways to stave off _boredom_. The solution to this is "busy-work": finding tasks that, at first sight, look useful, and then delivering them in an over-elaborate way that'll keep them occupied. This is also known as [_Gold Plating_](https://en.wikipedia.org/wiki/Gold_plating_(software_engineering)). This will leave you with more [Complexity Risk](/tags/Complexity-Risk) than you had in the first place.
+
+### Pet Projects
+
+> "A project, activity or goal pursued as a personal favourite, rather than because it is generally accepted as necessary or important." - [Pet Project, _Wiktionary_](https://www.wordnik.com/words/pet%20project)
+
+Sometimes budget-holders have projects they value more than others without reference to the value placed on them by the business. Perhaps the project has a goal that aligns closely with the budget holder's passions, or it's related to work they were previously responsible for.
+
+Working on a pet project usually means you get lots of attention (and more than enough budget), but it can fall apart very quickly under scrutiny.
+
+### Morale Failure
+
+![Morale Failure](/img/generated/risks/agency/morale.svg)
+
+> "Morale, also known as Esprit de Corps, is the capacity of a group's members to retain belief in an institution or goal, particularly in the face of opposition or hardship" - [Morale, _Wikipedia_](https://en.wikipedia.org/wiki/Morale)
+
+Sometimes the morale of the team or individuals within it dips, leading to lack of motivation. Losing morale is a kind of [Agency Risk](/tags/Agency-Risk) because it really means that a team member or the whole team isn't committed to the [Goal](/tags/Goal) and may decide their efforts are best spent elsewhere. Morale failure might be caused by:
+
+ - **External Factors**: perhaps the employee's dog has died, or they're simply tired of the industry, or are not feeling challenged.
+ - **The goal feels unachievable**: in this case people won't commit their full effort to it. This might be due to a difference in the evaluation of the risks on the project between the team members and the leader. In military science, a second meaning of morale is how well supplied and equipped a unit is. This would also seem like a useful reference point for IT projects. If teams are under-staffed or under-equipped, it will impact on motivation too.
+ - **The goal isn't sufficiently worthy**, or the team doesn't feel sufficiently valued.
+
+## Agency Elsewhere
+
+In the examples above, we've looked at the hierarchy of goals for _most people_. It doesn't always play out like this and the structure is quite fluid. Some examples:
+
+ - In 2018, a 15-year-old [Greta Thunberg](https://en.wikipedia.org/wiki/Greta_Thunberg) gave up her education goals to campaign outside parliament in Sweden. She is now widely recognised as a key figure in climate activism.
+ - Steve Jobs, despite designing amazing hardware at Apple Computers, was a self-confessed [terrible father](https://en.wikipedia.org/wiki/Steve_Jobs#Family) and [failed to look after himself when diagnosed with cancer](https://en.wikipedia.org/wiki/Steve_Jobs#Health_problems).
+ - Less specifically, soldiers often form very close bonds due to their reliance on each other for survival, akin to family members (_brothers in arms_).
+
+### Animals
+
+Given the fluidity of the goal hierarchy for people, we shouldn't be surprised that other animals don't have the same priorities. For example, [Colobopsis Saundersi](https://en.wikipedia.org/wiki/Colobopsis_saundersi) is a species of ant that can explode suicidally and aggressively as an ultimate act of defence. Given that individual ants are not capable of reproduction, it seems to make sense that they would sacrifice themselves for the good of the colony: to _not_ do so would reduce the colony's chance of surviving and reproducing.
+
+### Software Processes
+
+![Software Goals](/img/generated/risks/agency/software.svg)
+
+Compared to humans, most software has a simple goal hierarchy, as shown in the diagram above. Nevertheless, there is significant [Agency Risk](/tags/Agency-Risk) in running software _at all_. Since computer systems follow rules we set for them, we shouldn't be surprised when those rules have exceptions that lead to disaster. For example:
+
+ - A process continually writing log files until the disks fill up, crashing the system.
+ - Bugs causing data to get corrupted, causing financial loss.
+ - Malware exploiting weaknesses in a system, exposing sensitive data.
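Mitigating this kind of software [Agency Risk](/tags/Agency-Risk) usually means putting explicit limits on what the process is allowed to do. As a minimal, hypothetical sketch (the function name and size cap are invented for illustration), here is a log writer that refuses to fill the disk:

```python
import os

def append_log(path: str, line: str, max_bytes: int = 10_000_000) -> bool:
    """Append a line to the log, but only while the file is under a size cap.

    Returns False (dropping the line) once the cap would be exceeded, so a
    runaway process can't fill the disk and crash the system.
    """
    size = os.path.getsize(path) if os.path.exists(path) else 0
    if size + len(line) + 1 > max_bytes:
        return False
    with open(path, "a") as f:
        f.write(line + "\n")
    return True
```

The point is not this particular cap, but that the agent's rules now include their own exception handling: the limit is part of the design rather than a disaster discovered later.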
+
+### Paperclips
+
+Building software systems that try to optimise for a hierarchy of goals (like humans do) is still a research project. But it is something AI researchers such as [Nick Bostrom](https://en.wikipedia.org/wiki/Nick_Bostrom) worry about. Consider his AI thought experiment:
+
+> "If you give an artificial intelligence an explicit goal – like maximizing the number of paper clips in the world – and that artificial intelligence has gotten smart enough to the point where it is capable of inventing its own super-technologies and building its own manufacturing plants, then, well, be careful what you wish for." -- [Nick Bostrom, _Wikipedia_](https://en.wikipedia.org/wiki/Universal_Paperclips#Themes)
+
+![Universal Paperclips](/img/generated/risks/agency/paperclips.svg)
+
+Bostrom worries that humanity would be steamrollered accidentally whilst trying to maximise the paperclip goal. The AI need not be malevolent - it's enough that it just requires the resources that keep us alive!
+
+This problem may be a long way off. In any case it's not really in our interests to build AI systems that prioritise their own survival. As humans, we have inherited the survival goal through evolution: an AI wouldn't necessarily have this goal unless we subjected AI development to some kind of evolutionary survival-of-the-fittest process too.
+
+### Teams
+
+[Agency Risk](/tags/Agency-Risk) applies to _whole teams_ too. It's perfectly possible that a team within an organisation develops [Goals](/tags/Goal) that don't align with those of the overall organisation. For example:
+
+ - A team introduces excessive [Bureaucracy](Process-Risk#bureaucracy) in order to avoid work it doesn't like.
+ - A team gets obsessed with a particular technology, or their own internal process improvement, at the expense of delivering business value.
+ - A marginalised team forces their services on other teams in the name of "consistency". (This can happen a lot with "Architecture", "Branding" and "Testing" teams, sometimes for the better, sometimes for the worse.)
+
+When you work with an external consultancy, there is *always* more [Agency Risk](/tags/Agency-Risk) than with a direct employee. This is because, as well as your goals and the employee's goals, there are also the consultancy's goals.
+
+This is a good argument for avoiding consultancies, but sometimes the technical expertise they bring can outweigh this risk.
+
+## Mitigating Agency Risk
+
+Let's look at three common ways to mitigate [Agency Risk](/tags/Agency-Risk): [Monitoring](#monitoring), [Security](#security) and [Goal Alignment](#goal-alignment). Let's start with Monitoring.
+
+### Monitoring
+
+![Mitigating Agency Risk Through Monitoring](/img/generated/risks/agency/monitoring.svg)
+
+At the core of the Principal-Agent Problem is the issue that we _want_ our agents to do work for us so we don't have the responsibility of doing it ourselves. However, we pick up the second-order responsibility of managing the agents instead.
+
+As a result (and as shown in the above diagram), we need to _Monitor_ the agents. The price of mitigating [Agency Risk](/tags/Agency-Risk) this way is that we have to spend time doing the monitoring ([Schedule Risk](/tags/Schedule-Risk)) and we have to understand what the agents are doing ([Complexity Risk](/tags/Complexity-Risk)).
+
+Monitoring of _software process_ agents is an important part of designing reliable systems and it makes perfect sense that this would also apply to _human_ agents too. But for people, the _knowledge of being monitored_ can instil corrective behaviour. This is known as the Hawthorne Effect:
+
+> "The Hawthorne effect (also referred to as the observer effect) is a type of reactivity in which individuals modify an aspect of their behaviour in response to their awareness of being observed." - [Hawthorne Effect, _Wikipedia_](https://en.wikipedia.org/wiki/Hawthorne_effect)
+
+### Security
+
+Security is all about _setting limits_ on agency - both within and outside a system, so when we talk about "Security Risk" we are really talking about a failure to limit agency.
+
+![Related Risks](/img/generated/risks/agency/agency-risks.svg)
+
+_Within_ a system we may wish to prevent our agents from causing accidental (or deliberate) harm but we also have [Agency Risk](/tags/Agency-Risk) from unwanted agents _outside_ the system. So security is also about ensuring that the environment we work in is _safe_ for the good actors to operate in while keeping out the bad actors.
+
+Interestingly, security is handled in very similar ways in all kinds of systems, whether biological, human or institutional:
+
+- **Walls**: defences _around_ the system, to protect its parts from the external environment.
+- **Doors**: ways to get _in_ and _out_ of the system, possibly with _locks_.
+- **Guards**: to make sure only the right things go in and out. (i.e. to try and keep out _bad actors_).
+- **Police**: to defend from _within_ the system against internal [Agency Risk](/tags/Agency-Risk).
+- **Subterfuge**: hiding, camouflage, disguises, pretending to be something else.
+
+These work at various levels in **our own bodies**: our _cells_ have _cell walls_ around them, and _cell membranes_ that act as the guards to allow things in and out. Our _bodies_ have _skin_ to keep the world out, and we have _mouths_, _eyes_, _pores_ and so on to allow things in and out. We have an _immune system_ to act as the police.
+
+**Our societies** work in similar ways: in medieval times, a city would have walls, guards and gates to keep out intruders. Nowadays, we have customs control, borders and passports.
+
+We're waking up to the realisation that our software systems need to work the same way: we have [Firewalls](https://en.wikipedia.org/wiki/Firewall_(computing)) and we lock down _ports_ on servers to ensure there are the minimum number of _doors_ to guard, we _police_ the servers with monitoring tools, and we _guard_ access using passwords and other identification approaches.
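As a sketch of the "walls and doors" idea in server configuration (this assumes Ubuntu's `ufw` firewall and SSH on its default port; adapt to your own environment), locking a server down to a single guarded door might look like:

```shell
# Walls: refuse all incoming traffic by default
sudo ufw default deny incoming
sudo ufw default allow outgoing

# One door, guarded: only SSH is allowed in
sudo ufw allow 22/tcp

# Turn the policy on
sudo ufw enable
```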
+
+![Security as a mitigation for Agency Risk](/img/generated/risks/agency/security-risk.svg)
+
+[Agency Risk](/tags/Agency-Risk) and [Security Risk](Agency-Risk#security) thrive on complexity: the more complex the systems we create, the more opportunities there are for bad actors to insert themselves and extract their own value. The dilemma is, _increasing security_ also means increasing [Complexity Risk](/tags/Complexity-Risk), because secure systems are necessarily more complex than insecure ones.
+
+### Goal Alignment
+
+As we stated at the beginning, [Agency Risk](/tags/Agency-Risk) at any level comes down to differences of [Goals](/tags/Goal) between the different agents, whether they are _people_, _teams_ or _software_.
+
+#### Skin In The Game
+
+If you can _align the goals_ of the agents involved, you can mitigate [Agency Risk](/tags/Agency-Risk). Nassim Nicholas Taleb calls this "skin in the game": that is, the agent is exposed to the same risks as the principal.
+
+> "Which brings us to the largest fragilizer of society, and greatest generator of crises, absence of 'skin in the game.' Some become antifragile at the expense of others by getting the upside (or gains) from volatility, variations, and disorder and exposing others to the downside risks of losses or harm." - [Nassim Nicholas Taleb, _Antifragile_](https://a.co/d/07LfBTI)
+
+Mafia bosses understand this theory well: in order to engender _complete loyalty_ in your soldiers, you threaten their families. Follow the rules or your family gets whacked!
+
+Another example of this is [The Code of Hammurabi](https://en.wikipedia.org/wiki/Code_of_Hammurabi), a Babylonian legal text composed c. 1755–1750 BC. One law states:
+
+> "The death of a homeowner in a house collapse necessitates the death of the house's builder... if the homeowner's son died, the builder's son must die also." - [Code of Hammurabi, _Wikipedia_](https://en.wikipedia.org/wiki/Code_of_Hammurabi#Theories_of_purpose)
+
+Luckily, these kinds of exposure aren't very common on software projects! [Fixed Price Contracts](/thinking/One-Size-Fits-No-One#waterfall) and [Employee Stock Options](https://en.wikipedia.org/wiki/Employee_stock_option) are two exceptions.
+
+#### Needs Theory
+
+David McClelland's Needs Theory suggests that there are two types of skin-in-the-game: the _intrinsic_ interest in the work being done and _extrinsic_ factors such as the recognition, achievement, or personal growth derived from it.
+
+> "Need theory... proposed by psychologist David McClelland, is a motivational model that attempts to explain how the needs for achievement, power, and affiliation affect the actions of people from a managerial context... McClelland stated that we all have these three types of motivation regardless of age, sex, race, or culture. The type of motivation by which each individual is driven derives from their life experiences and the opinions of their culture. " - [Need Theory, _Wikipedia_](https://en.wikipedia.org/wiki/Need_theory)
+
+One mitigation for [Agency Risk](/tags/Agency-Risk) is therefore to employ these extrinsic factors. For example, by making individuals responsible and rewarded for the success or failure of projects, we can align their personal motivations with those of the project.
+
+> "One key to success in a mission is establishing clear lines of blame." - [Henshaw's Law, _Akin's Laws Of Spacecraft Design_](https://spacecraft.ssl.umd.edu/akins_laws.html)
+
+But _extrinsic motivation_ is a complex, difficult-to-apply tool. In [Map And Territory Risk](/tags/Map-And-Territory-Risk) we will come back to this and look at the various ways in which it can go awry.
+
+![Collective Code Ownership, Individual Responsibility](/img/generated/risks/agency/cco.svg)
+
+Tools like [Pair Programming](https://en.wikipedia.org/wiki/Pair_programming) and [Collective Code Ownership](https://en.wikipedia.org/wiki/Collective_ownership) are about mitigating [Staff Risks](/tags/Staff-Risk) like [Key Person Risk](https://en.wikipedia.org/wiki/Key_person_insurance#Key_person_definition) and [Learning Curve Risk](/tags/Learning-Curve-Risk), but these push in the opposite direction to _individual responsibility_.
+
+This is an important consideration: in adopting _those_ tools, you are necessarily setting aside certain _other_ tools to manage [Agency Risk](/tags/Agency-Risk) as a result.
+
+## Wrapping Up
+
+We've looked at various shades of [Agency Risk](/tags/Agency-Risk) and three different mitigations for it. [Agency Risk](/tags/Agency-Risk) is a concern at the level of _individual agents_, whether they are processes, people, systems or teams.
+
+So having looked at agents _individually_, it's time to look more closely at [Goals](/tags/Goal), and the [Attendant Risks](/tags/Attendant-Risk) when aligning them amongst multiple agents.
+
+On to [Coordination Risk](/tags/Coordination-Risk)...
+
+
+
\ No newline at end of file
diff --git a/docs/risks/Dependency-Risks/On-Software-Dependencies.md b/docs/risks/Dependency-Risks/On-Software-Dependencies.md
index fce9c98ff..9579dd0a5 100644
--- a/docs/risks/Dependency-Risks/On-Software-Dependencies.md
+++ b/docs/risks/Dependency-Risks/On-Software-Dependencies.md
@@ -275,6 +275,6 @@ Choosing dependencies can be extremely difficult. As we discussed above, the us
> "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - [Abraham Maslow, _Toward a Psychology of Being_](https://en.wiktionary.org/wiki/if_all_you_have_is_a_hammer,_everything_looks_like_a_nail)
-Having chosen a dependency, whether or not you end up in a more favourable position risk-wise is going to depend heavily on the quality of the execution and the skill of the implementor. With software dependencies we often have to live with the decisions we make for a long time: _choosing_ the software dependency is far easier than _changing it later_.
+Having chosen a dependency, whether or not you end up in a more favourable position risk-wise is going to depend heavily on the quality of the execution and the skill of the implementer. With software dependencies we often have to live with the decisions we make for a long time: _choosing_ the software dependency is far easier than _changing it later_.
Let's take a closer look at this problem in the section on [Lock-In Risk](/tags/Lock-In-Risk). But first, let's look at [processes](/tags/Process-Risk).
diff --git a/docs/thinking/De-Risking.md b/docs/thinking/De-Risking.md
index 5be1c79c5..e1a15bd2c 100644
--- a/docs/thinking/De-Risking.md
+++ b/docs/thinking/De-Risking.md
@@ -209,9 +209,9 @@ There is a grey area here, because on the one hand you are [retaining](#retain)
### General Examples
-- **Stop-Loss Trades** are an investment where should the trade start loosing too much money, the trade is closed out, limiting the downside.
+- **Stop-Loss Trades** are an investment order whereby, should the trade start losing too much money, the trade is closed out, limiting the downside.
-- **Incident Reporting Plans**: often businesses will have a procedure for dealing with irregular behaviour (such as a cyber attack).
+- **Incident Reporting Plans**: often businesses will have a procedure for dealing with irregular behaviour (such as a cyberattack).
- **Regulations**: firms such as banks or drug companies operate under heavy regulation, designed to control and limit both the amount of risk they take on and the risk they expose their customers to.
diff --git a/package-lock.json b/package-lock.json
index 46f2751ed..2805a9354 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -3592,24 +3592,6 @@
"@types/ms": "*"
}
},
- "node_modules/@types/eslint": {
- "version": "9.6.0",
- "resolved": "https://registry.npmjs.org/@types/eslint/-/eslint-9.6.0.tgz",
- "integrity": "sha512-gi6WQJ7cHRgZxtkQEoyHMppPjq9Kxo5Tjn2prSKDSmZrCz8TZ3jSRCeTJm+WoM+oB0WG37bRqLzaaU3q7JypGg==",
- "dependencies": {
- "@types/estree": "*",
- "@types/json-schema": "*"
- }
- },
- "node_modules/@types/eslint-scope": {
- "version": "3.7.7",
- "resolved": "https://registry.npmjs.org/@types/eslint-scope/-/eslint-scope-3.7.7.tgz",
- "integrity": "sha512-MzMFlSLBqNF2gcHWO0G1vP/YQyfvrxZ0bF+u7mzUdZ1/xK4A4sru+nraZz5i3iEIk1l1uyicaDVTB4QbbEkAYg==",
- "dependencies": {
- "@types/eslint": "*",
- "@types/estree": "*"
- }
- },
"node_modules/@types/estree": {
"version": "1.0.5",
"resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.5.tgz",
@@ -4456,9 +4438,9 @@
}
},
"node_modules/body-parser": {
- "version": "1.20.2",
- "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.20.2.tgz",
- "integrity": "sha512-ml9pReCu3M61kGlqoTm2umSXTlRTuGTx0bfYj+uIUKKYycG5NtSbeetV3faSU6R7ajOPw0g/J1PvK4qNy7s5bA==",
+ "version": "1.20.3",
+ "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.20.3.tgz",
+ "integrity": "sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g==",
"dependencies": {
"bytes": "3.1.2",
"content-type": "~1.0.5",
@@ -4468,7 +4450,7 @@
"http-errors": "2.0.0",
"iconv-lite": "0.4.24",
"on-finished": "2.4.1",
- "qs": "6.11.0",
+ "qs": "6.13.0",
"raw-body": "2.5.2",
"type-is": "~1.6.18",
"unpipe": "1.0.0"
@@ -5136,9 +5118,9 @@
"integrity": "sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg=="
},
"node_modules/cookie": {
- "version": "0.6.0",
- "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.6.0.tgz",
- "integrity": "sha512-U71cyTamuh1CRNCfpGY6to28lxvNwPG4Guz/EVjgf3Jmzv0vlDp1atT9eS5dDjMYHucpHbWns6Lwf3BKz6svdw==",
+ "version": "0.7.1",
+ "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz",
+ "integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==",
"engines": {
"node": ">= 0.6"
}
@@ -5988,9 +5970,9 @@
}
},
"node_modules/encodeurl": {
- "version": "1.0.2",
- "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz",
- "integrity": "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==",
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz",
+ "integrity": "sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==",
"engines": {
"node": ">= 0.8"
}
@@ -6293,36 +6275,36 @@
}
},
"node_modules/express": {
- "version": "4.19.2",
- "resolved": "https://registry.npmjs.org/express/-/express-4.19.2.tgz",
- "integrity": "sha512-5T6nhjsT+EOMzuck8JjBHARTHfMht0POzlA60WV2pMD3gyXw2LZnZ+ueGdNxG+0calOJcWKbpFcuzLZ91YWq9Q==",
+ "version": "4.21.1",
+ "resolved": "https://registry.npmjs.org/express/-/express-4.21.1.tgz",
+ "integrity": "sha512-YSFlK1Ee0/GC8QaO91tHcDxJiE/X4FbpAyQWkxAvG6AXCuR65YzK8ua6D9hvi/TzUfZMpc+BwuM1IPw8fmQBiQ==",
"dependencies": {
"accepts": "~1.3.8",
"array-flatten": "1.1.1",
- "body-parser": "1.20.2",
+ "body-parser": "1.20.3",
"content-disposition": "0.5.4",
"content-type": "~1.0.4",
- "cookie": "0.6.0",
+ "cookie": "0.7.1",
"cookie-signature": "1.0.6",
"debug": "2.6.9",
"depd": "2.0.0",
- "encodeurl": "~1.0.2",
+ "encodeurl": "~2.0.0",
"escape-html": "~1.0.3",
"etag": "~1.8.1",
- "finalhandler": "1.2.0",
+ "finalhandler": "1.3.1",
"fresh": "0.5.2",
"http-errors": "2.0.0",
- "merge-descriptors": "1.0.1",
+ "merge-descriptors": "1.0.3",
"methods": "~1.1.2",
"on-finished": "2.4.1",
"parseurl": "~1.3.3",
- "path-to-regexp": "0.1.7",
+ "path-to-regexp": "0.1.10",
"proxy-addr": "~2.0.7",
- "qs": "6.11.0",
+ "qs": "6.13.0",
"range-parser": "~1.2.1",
"safe-buffer": "5.2.1",
- "send": "0.18.0",
- "serve-static": "1.15.0",
+ "send": "0.19.0",
+ "serve-static": "1.16.2",
"setprototypeof": "1.2.0",
"statuses": "2.0.1",
"type-is": "~1.6.18",
@@ -6358,9 +6340,9 @@
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="
},
"node_modules/express/node_modules/path-to-regexp": {
- "version": "0.1.7",
- "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz",
- "integrity": "sha512-5DFkuoqlv1uYQKxy8omFBeJPQcdoE07Kv2sferDCrAq1ohOU+MSDswDIbnx3YAM60qIOnYa53wBhXW0EbMonrQ=="
+ "version": "0.1.10",
+ "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.10.tgz",
+ "integrity": "sha512-7lf7qcQidTku0Gu3YDPc8DJ1q7OOucfa/BSsIwjuh56VU7katFvuM8hULfkwB3Fns/rsVF7PwPKVw1sl5KQS9w=="
},
"node_modules/express/node_modules/range-parser": {
"version": "1.2.1",
@@ -6550,12 +6532,12 @@
}
},
"node_modules/finalhandler": {
- "version": "1.2.0",
- "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.2.0.tgz",
- "integrity": "sha512-5uXcUVftlQMFnWC9qu/svkWv3GTd2PfUhK/3PLkYNAe7FbqJMt3515HaxE6eRL74GdsriiwujiawdaB1BpEISg==",
+ "version": "1.3.1",
+ "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.3.1.tgz",
+ "integrity": "sha512-6BN9trH7bp3qvnrRyzsBz+g3lZxTNZTbVO2EV1CS0WIcDbawYVdYvGflME/9QP0h0pYlCDBCTjYa9nZzMDpyxQ==",
"dependencies": {
"debug": "2.6.9",
- "encodeurl": "~1.0.2",
+ "encodeurl": "~2.0.0",
"escape-html": "~1.0.3",
"on-finished": "2.4.1",
"parseurl": "~1.3.3",
@@ -8933,9 +8915,12 @@
}
},
"node_modules/merge-descriptors": {
- "version": "1.0.1",
- "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.1.tgz",
- "integrity": "sha512-cCi6g3/Zr1iqQi6ySbseM1Xvooa98N0w31jzUYrXPX2xqObmFGHJ0tQ5u74H3mVh7wLouTseZyYIq39g8cNp1w=="
+ "version": "1.0.3",
+ "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.3.tgz",
+ "integrity": "sha512-gaNvAS7TZ897/rVaZ0nMtAyxNyi/pdbjbAwUpFQpN70GqnVfOiXpeUUMKRBmzXaSQ8DdTX4/0ms62r2K+hE6mQ==",
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
},
"node_modules/merge-stream": {
"version": "2.0.0",
@@ -12099,11 +12084,11 @@
}
},
"node_modules/qs": {
- "version": "6.11.0",
- "resolved": "https://registry.npmjs.org/qs/-/qs-6.11.0.tgz",
- "integrity": "sha512-MvjoMCJwEarSbUYk5O+nmoSzSutSsTwF85zcHPQ9OrlFoZOYIjaqBAJIqIXjptyD5vThxGq52Xu/MaJzRkIk4Q==",
+ "version": "6.13.0",
+ "resolved": "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz",
+ "integrity": "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==",
"dependencies": {
- "side-channel": "^1.0.4"
+ "side-channel": "^1.0.6"
},
"engines": {
"node": ">=0.6"
@@ -13146,9 +13131,9 @@
}
},
"node_modules/send": {
- "version": "0.18.0",
- "resolved": "https://registry.npmjs.org/send/-/send-0.18.0.tgz",
- "integrity": "sha512-qqWzuOjSFOuqPjFe4NOsMLafToQQwBSOEpS+FwEt3A2V3vKubTquT3vmLTQpFgMXp8AlFWFuP1qKaJZOtPpVXg==",
+ "version": "0.19.0",
+ "resolved": "https://registry.npmjs.org/send/-/send-0.19.0.tgz",
+ "integrity": "sha512-dW41u5VfLXu8SJh5bwRmyYUbAoSB3c9uQh6L8h/KtsFREPWpbX1lrljJo186Jc4nmci/sGUZ9a0a0J2zgfq2hw==",
"dependencies": {
"debug": "2.6.9",
"depd": "2.0.0",
@@ -13181,6 +13166,14 @@
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="
},
+ "node_modules/send/node_modules/encodeurl": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz",
+ "integrity": "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==",
+ "engines": {
+ "node": ">= 0.8"
+ }
+ },
"node_modules/send/node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
@@ -13293,14 +13286,14 @@
}
},
"node_modules/serve-static": {
- "version": "1.15.0",
- "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.15.0.tgz",
- "integrity": "sha512-XGuRDNjXUijsUL0vl6nSD7cwURuzEgglbOaFuZM9g3kwDXOWVTck0jLzjPzGD+TazWbboZYu52/9/XPdUgne9g==",
+ "version": "1.16.2",
+ "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.16.2.tgz",
+ "integrity": "sha512-VqpjJZKadQB/PEbEwvFdO43Ax5dFBZ2UECszz8bQ7pi7wt//PWe1P6MN7eCnjsatYtBT6EuiClbjSWP2WrIoTw==",
"dependencies": {
- "encodeurl": "~1.0.2",
+ "encodeurl": "~2.0.0",
"escape-html": "~1.0.3",
"parseurl": "~1.3.3",
- "send": "0.18.0"
+ "send": "0.19.0"
},
"engines": {
"node": ">= 0.8.0"
@@ -14585,11 +14578,10 @@
}
},
"node_modules/webpack": {
- "version": "5.93.0",
- "resolved": "https://registry.npmjs.org/webpack/-/webpack-5.93.0.tgz",
- "integrity": "sha512-Y0m5oEY1LRuwly578VqluorkXbvXKh7U3rLoQCEO04M97ScRr44afGVkI0FQFsXzysk5OgFAxjZAb9rsGQVihA==",
+ "version": "5.95.0",
+ "resolved": "https://registry.npmjs.org/webpack/-/webpack-5.95.0.tgz",
+ "integrity": "sha512-2t3XstrKULz41MNMBF+cJ97TyHdyQ8HCt//pqErqDvNjU9YQBnZxIHa11VXsi7F3mb5/aO2tuDxdeTPdU7xu9Q==",
"dependencies": {
- "@types/eslint-scope": "^3.7.3",
"@types/estree": "^1.0.5",
"@webassemblyjs/ast": "^1.12.1",
"@webassemblyjs/wasm-edit": "^1.12.1",
@@ -14598,7 +14590,7 @@
"acorn-import-attributes": "^1.9.5",
"browserslist": "^4.21.10",
"chrome-trace-event": "^1.0.2",
- "enhanced-resolve": "^5.17.0",
+ "enhanced-resolve": "^5.17.1",
"es-module-lexer": "^1.2.1",
"eslint-scope": "5.1.1",
"events": "^3.2.0",
diff --git a/src/components/FillTheBucket1/index.js b/src/components/FillTheBucket1/index.js
index 3b5d5f824..18d008655 100644
--- a/src/components/FillTheBucket1/index.js
+++ b/src/components/FillTheBucket1/index.js
@@ -44,7 +44,7 @@ const chart1 = (model) => {
return {
type: 'line',
id: '1',
- optons: {
+ options: {
scales: {
y: {
ticks: {
diff --git a/unused_content/complexity/Hierarchies.md b/unused_content/complexity/Hierarchies.md
index ec2030575..ea8db7521 100644
--- a/unused_content/complexity/Hierarchies.md
+++ b/unused_content/complexity/Hierarchies.md
@@ -244,7 +244,7 @@ Subsumptive hierarchies are difficult for a couple of reasons. The first being
As an example of this, let's consider _planets_. The definition of a planet is quite bogus, and has changed over time:
- The Greeks coined _asteres planetai_ to be the class of objects in the sky that moved separately from the rest of the body of stars, possibly including moons, comets and asteroids. [1](https://en.wikipedia.org/wiki/Definition_of_planet#Planets_in_antiquity).
-- However, after the [Copernican Revolution](https://en.wikipedia.org/wiki/Definition_of_planet#satellites) made the moon a satellite of earth, the defintion of planets seemed to be _bodies orbiting the sun_, and there were just 9 of them: Mercury, Mars, Earth, Venus, Saturn, Jupiter, Uranus, Neptune and Pluto.
+- However, after the [Copernican Revolution](https://en.wikipedia.org/wiki/Definition_of_planet#satellites) made the Moon a satellite of Earth, the definition of planets seemed to be _bodies orbiting the Sun_, and there were just 9 of them: Mercury, Mars, Earth, Venus, Saturn, Jupiter, Uranus, Neptune and Pluto.
- In 2005, [The Discovery of Eris](https://en.wikipedia.org/wiki/Definition_of_planet#Pluto), a body _larger_ than Pluto orbiting in a trans-Neptunian orbit meant that [potentially hundreds of objects](https://en.wikipedia.org/wiki/Trans-Neptunian_object#/media/File:TheTransneptunians_73AU.svg) deserved the term planet.
- In response, Pluto was demoted to being a _dwarf planet_. In order to do this, the definition of planet was changed to include the clause that it had "cleared its neighbourhood" of most other orbiting bodies. This excluded Kuiper-Belt objects such as Pluto, but is _still problematic_, as Alan Stern discusses below.