June 10th 2020
Software engineer with a curiosity for behavioral economics
Public policy is quite an obscure profession. It hides in plain sight. Most people don’t realize that there’s a piece of sophisticated machinery constantly shaping the world they live in — a system under but also beyond politics. So let’s define it first. Public policy is any state-initiated and state-executed intervention to achieve a certain outcome. That outcome could be many things — an increase in literacy, reduced corruption, growth in exports, people using products made within the country, and so on. All of them, in theory, have the long-term good of the citizens as an ever-present overarching aim.
Policy is an application of Economics. It’s about changing a few variables in a system to get a certain outcome. But most of the time, a single deliberate change causes a hundred unwanted changes in effect. That definitely sounds like software engineering. Both policymakers and software engineers deal with systems that tend to do more than what’s asked of them. Unintended consequences tie these fields together.
But policy is more complicated than software engineering, because it has to work on human beings. Programming a computer, whose whole purpose is to obey you and do what’s asked, is hard enough. Imagine programming thousands of computers at once, except this time the computers don’t even listen to you: they think they know what they need and are least bothered by your instructions. Software engineers work with deterministic systems; policy professionals do not, and that makes their job much more challenging. (Assuming free will, and that humans aren’t systems which are extremely complex, but deterministic nonetheless.)
Play the long game
Economic reforms take time. Even a small economic reform takes sustained time and effort. Its effects roll out in phases and need to be constantly monitored. Something that seems to be working out in the short-term could backfire in the long-term. A policy that’s not beneficial in the long-term is a failure. As Kelkar and Shah put it, “it’s a test match, not IPL.”
In software engineering, when you get your code working, it’s tempting to consider the work done and move on. But the code doing what it was supposed to do is not a sufficient condition for completion; it is only a short-term result. The long-term results, in this context, would take into account how the code responds to modification — whether it is understandable, extensible, and maintainable. When you aim for long-term results, you start including the so-called boring things in your day-to-day work: structuring the code well, leaving useful comments, and writing documentation. As it is for policy professionals, so it is for software engineers — a short-term solution is no solution.
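The boring things are easiest to see in a concrete sketch. The function below is hypothetical — the name and the loan-payment scenario are invented for illustration — but it shows the kind of short-term “extra” work the paragraph argues for: type hints, a docstring, and an explicit edge-case branch, so the next modification is cheap.

```python
def monthly_installment(principal: float, annual_rate: float, months: int) -> float:
    """Fixed monthly payment for an amortized loan.

    Args:
        principal: amount borrowed.
        annual_rate: nominal yearly interest rate, e.g. 0.12 for 12%.
        months: number of monthly payments.
    """
    if months <= 0:
        raise ValueError("months must be positive")
    rate = annual_rate / 12  # convert the yearly rate to a per-month rate
    if rate == 0:
        # No interest: the payment is just the principal split evenly.
        return principal / months
    growth = (1 + rate) ** months
    return principal * rate * growth / (growth - 1)
```

None of this is needed to make the code “work” today; all of it is needed to make it survive next year.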
Do the boring things.
Test your solutions
Policymaking isn’t considered a hard science mainly because of the difficulty of reproducing results. Due to the large number of variables in human societies, what works in a certain locality may not work in its neighborhood. So, no matter how good a policy sounds in theory, it needs to be tested. Policy people do this with Randomized Controlled Trials (RCTs). In these trials, a policy is implemented on a small group of people, and the impact is measured and compared against a similar group on which the policy was not implemented. This helps determine the effectiveness of an intervention. Such tests are necessary because it’s impossible to accurately predict how a policy might turn out. The 2019 Nobel Prize in Economics awarded to Abhijit Banerjee, Esther Duflo, and Michael Kremer was widely seen as a validation of the use of RCTs in policy.
In the software context, such trials become a key process in assessing UX. What we, as engineers of the user experience, consider optimal might be far from the truth. Evidence is better than intuition: do A/B testing, and let your users tell you what they prefer. To quote W. Edwards Deming, “In God we trust; all others must bring data.”
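The arithmetic behind an A/B test is small enough to sketch. The helper below is an illustration, not any particular library’s API: a standard two-proportion z-test over the standard library, assuming you have conversion counts from two variants.

```python
import math

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test: does variant B's conversion
    rate differ from variant A's?  Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate: the conversion rate if there were no real difference.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf, doubled for a two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

With, say, 120/1000 conversions on variant A and 150/1000 on variant B, the test comes out right around the conventional p = 0.05 threshold — which is exactly the point: the data, not your intuition, decides whether B ships.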
Treat the disease, not the symptom
This again is a manifestation of the complexity of the world. A seen effect could be a result of unseen causes. The black smoke in the sky could be from burning of tires by vandals who have nothing else to do because they’re unemployed because new jobs aren’t coming in because the industries are steering clear of your state because the government has mandated a high reservation for locals. The disease here happens to be a botched policy itself and treating the disease means its reversal. Everything else is just treating the symptom. Curbing the sale of tires will not bring clear skies.
Likewise, bugs in software are mostly symptoms, not diseases. Taking care of the symptom is primary, yes, but the underlying cause shouldn’t be ignored. In an application, a frequently reappearing class of bugs indicates a lack of structure, or of guidelines for development to adhere to. Unless that foundation is built, fixing individual bugs only delays the inevitable.
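A hypothetical illustration (the function and formats are invented for the example): suppose malformed phone numbers keep surfacing as crashes all over an app. Patching each crash site treats symptoms; the cure is one normalization boundary the rest of the code can trust.

```python
def normalize_phone(raw: str) -> str:
    """Normalize user input once, at the boundary, so every caller
    downstream can assume a canonical 10-digit string."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    # Strip a leading country code when present (91 assumed here,
    # matching the essay's Indian context).
    if len(digits) == 12 and digits.startswith("91"):
        digits = digits[2:]
    if len(digits) != 10:
        raise ValueError(f"invalid phone number: {raw!r}")
    return digits
```

The whole recurring class of bugs now fails loudly in one place instead of quietly in many — that is what treating the disease looks like in code.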
What can go wrong, will go wrong
The bureaucracy that implements policy is a complex web. A lot of hands are involved in bringing out a policy. The intentions of the policy designers sometimes fail to reach the ones implementing it. This gap demands directives that make the details of implementation airtight, leaving nothing to interpretation. And this extra work starts with realizing that what can go wrong, will go wrong.
In the context of software engineering, this is the aspect of security. The security features of a piece of software should be designed assuming the worst hackers are out to get it. But security doesn’t just cover external factors. The software should be safe even from fellow collaborators. A developer having a bad day and making poor choices they generally would not is a very real possibility. These mistakes would ideally be spotted in peer code reviews. Even if they escape review, bad decisions shouldn’t domino through the system. Contagious modules should get automatically quarantined. This requires preemptively identifying fragile or vulnerable portions of the codebase and tightly regulating access to them. If a system expects its participants to be vigilant and have impeccable foresight all the time, it’s fragile. Build robust software resistant to bad actors — internal and external.
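One way to sketch that tightly regulated access in code — the class name and the allowlist here are hypothetical — is a wrapper that quarantines fragile settings behind an explicit set of keys that are safe to touch:

```python
class GuardedConfig:
    """Wrap a fragile settings store so collaborators can freely read,
    but may only write keys on an explicit allowlist; everything else
    fails fast instead of dominoing through the system."""

    _MUTABLE = {"log_level", "feature_flags"}  # hypothetical safe-to-touch keys

    def __init__(self, settings: dict):
        self._settings = dict(settings)  # private copy, no shared aliasing

    def get(self, key: str):
        return self._settings[key]

    def set(self, key: str, value) -> None:
        if key not in self._MUTABLE:
            raise PermissionError(f"{key!r} is quarantined; change it via review")
        self._settings[key] = value
```

A developer having a bad day can still flip a log level, but a careless write to something load-bearing stops at the boundary rather than reaching production.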
Have skin in the game
The lessons till now were based on what makes policy work effectively. But we can learn more from its points of failure.
It’s difficult to devise policies that change people’s lives for good while being insulated from their impact. Most policymakers suffer a very real disconnection from the people they target. Neither the success nor the failure of a policy generally makes its way back to them. This is an avenue where software engineers can do radically better than policy professionals.
In software engineering, this problem of connection boils down to the user feedback loop — how close the builders of a system are to its users. Engineers are generally well placed in the loop, but the one practice that would seal the loop in all the right places is Support Driven Development. Kevin Hale, who coined the term, describes it like this: “what we’re trying to figure out is how to inject values like responsibility, accountability, humility, and modesty into software development. It’s a way of creating high-quality software. All you have to do is make everyone do customer support. What you end up having is, you fix the feedback. The people who built the software are the ones supporting it.” With this, you plant a thought in the developer — “if I’m going to build this, how does it affect me later when I have to support the user?” — and with that, some skin is pulled into the game. N. N. Taleb would approve.
Economics is a treasure trove of insights on human behavior. For all the time we spend with other humans, we spend disproportionately little of it understanding them. Hopefully, this essay has provided — to software engineers in particular — good reasons to explore Economics, so that you do what you do just a little bit better.