A Practical Guide to Systems Thinking

To solve a problem, you first need to understand the problem.

Problems exist within systems.

So, to understand the problem, you need to understand the system.

By systems I mean complex adaptive systems. They are the most interesting kind, and the most difficult to deal with.

Complex, because the parts are interdependent: remove one and the system is destroyed. Like removing the heart from a human body.

Adaptive, because each part reacts to what the other parts do. Like the heart pumping faster when artery blockages develop.

Treating a complex system like a black box, where you figure out the problem by trying solutions, is a terrible idea. Imagine going to a doctor who, without listening to you, gives you aspirin. Do you feel better? No? Then paracetamol. Still no? An antacid. Still no? … Sigh, okay, tell me what's wrong?

But that’s how we solve most surface problems. And in simple systems, it usually works.

The key is, can you distinguish between simple and complex systems?

If you don’t know what to call it, you can’t distinguish it. [1]

An alternative is white-box thinking. Here, we dissect the system, get into its internals, and figure out how it works, and why things aren't working.

I think there are at least two ways to figure out a system.

  1. Simulate the system
  2. Experiment with the system

Simulation

Simulation is either computational or thought-based. One is running a computer simulation; the other is running a thought experiment.

In both cases, what you get are insights into how the system works. The two go hand in hand.

Both need a basic understanding of how things work: without an existing model, you can't simulate. These methods also refine your model. For example, you start with a barebones model built from experience. Then you run it. Then you improve the model based on the difference between its results and the world's.

One landmark example of simulation is The Limits to Growth, done in 1972. It traces the decline of humanity on the current trajectory of consumption and environmental degradation.

Criticized at first, LTG has come to be accepted as a model that tracks reality, now that we have 50 years of living the model behind us.
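To make the idea concrete, here is a toy sketch in the spirit of LTG-style system dynamics. It is not the actual World3 model, and every number in it is invented for illustration: a population grows, draws down a finite resource, and declines once the resource runs out.

```python
# Toy resource-depletion model: a made-up sketch, not World3.
# Population grows while a finite resource can meet demand,
# then declines once the resource is exhausted.

def simulate(resource=1000.0, population=10.0, growth=0.05,
             use_per_capita=0.5, steps=200):
    """Run the toy model and return the population trajectory."""
    history = []
    for _ in range(steps):
        demand = population * use_per_capita
        if resource >= demand:
            resource -= demand
            population *= 1 + growth   # plenty: population grows
        else:
            resource = 0.0
            population *= 0.9          # scarcity: population declines
        history.append(population)
    return history

traj = simulate()
print(f"peak population: {max(traj):.1f}, final: {traj[-1]:.1f}")
```

The overshoot-and-collapse shape appears even in this five-line mechanism, which is the point of simulating: the structure of the system, not the precise numbers, produces the behavior.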

Thought experiments

Take a hypothesis, a theory, or a principle. Think through what would happen if it were true. That's a thought experiment.

Consider an example: Schrödinger's cat (1935). It presents a cat that is indeterminately alive or dead, depending on a random quantum event. Figuring out whether it's alive depends on looking in the box. Until that point, it's both alive and dead.

What makes this thought experiment so great is the way it explains quantum theory. Say, instead, you had a coin inside the box. You can toss this coin by pressing a button outside the box. If it's heads, the cat dies. Tails, and it lives. With every button press, the probability that the cat survives decreases, but at no point is the cat both alive and dead. Is your mind blown yet?
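The coin variant is easy to check with arithmetic: assuming a fair coin and independent tosses, the probability that the cat is still alive after n presses is 0.5^n. A quick sketch:

```python
# Survival odds for the coin-in-a-box variant: each press is an
# independent fair coin toss, and heads kills the cat.

def survival_probability(presses, p_tails=0.5):
    """Probability the cat is alive after `presses` button presses."""
    return p_tails ** presses

for n in (1, 3, 10):
    print(f"after {n:2d} presses: {survival_probability(n):.4f}")
# After every press the cat is definitely alive or definitely dead;
# the probability only tells us which outcome to bet on.
```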


The process

You begin with a set of hypotheses about what would happen. You run through them in your mind.

There are two things that might happen:

  1. You arrive at an outcome different from what you expected
  2. You arrive at your version of reality

Both cases call for a reality check:

  1. You confirm that's what would happen in the real world, via experience
  2. If your model is close enough to the real world, profit
  3. Else, you refine your hypotheses and restart
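The steps above can be sketched as code. A minimal, hypothetical example: the "world" is some observed data (invented here), the barebones model is a single growth-rate parameter, and we refine that parameter until the model's output is close enough to reality.

```python
# A minimal sketch of the hypothesize -> simulate -> compare -> refine
# loop. The "world" is fake observed data; the model is one growth-rate
# parameter that we nudge toward reality.

observed = [10, 12, 14.4, 17.3, 20.7]     # hypothetical measurements

def model(rate, steps=len(observed), start=10.0):
    """Simulate: compound growth from a starting value."""
    value, out = start, []
    for _ in range(steps):
        out.append(value)
        value *= 1 + rate
    return out

rate = 0.0                                 # barebones first guess
for _ in range(100):                       # refine
    predicted = model(rate)
    error = predicted[-1] - observed[-1]   # compare with the world
    if abs(error) < 0.01:                  # close enough: profit
        break
    rate -= 0.005 * error                  # else adjust and restart
print(f"refined growth rate: {rate:.3f}")
```

The loop converges on roughly 20% growth, the rate hidden in the fake data; the mechanics are the same whether the model lives in your head or in code.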

Experimentation

Here, we enter the territory of real world experiments with real world consequences that might change the state of the system.

You’re perturbing the system to try and figure out what will happen. Thus, you want to experiment with small changes first. [2]

Want universal basic income? Try experimenting in a few cities first. It's A/B testing at whatever scale works.
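A minimal sketch of that idea, with entirely invented data: run the change on a small pilot group, compare against a control group, and estimate the effect before scaling up.

```python
import random

# Small-scale experimentation sketch: pilot a change for one group,
# keep a control group, compare outcomes. All data is randomly
# generated for illustration.

random.seed(42)

def outcome(treated):
    """Hypothetical metric: baseline noise plus a small treatment effect."""
    return random.gauss(100, 5) + (3 if treated else 0)

pilot   = [outcome(treated=True)  for _ in range(50)]
control = [outcome(treated=False) for _ in range(50)]

lift = sum(pilot) / len(pilot) - sum(control) / len(control)
print(f"estimated lift from the pilot: {lift:.2f}")
```

The estimated lift hovers around the true effect of 3, with noise; a real experiment would also need a significance test before you trust the number.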

The only caveat: things break when you scale up.

Experimenting with the system is needed when your imagination can't keep up with what you're seeing. This usually happens in complex systems: there are too many variables. You don't know half of them, and you can't keep the other half in your brain. [3]
