What Happens When Design Fails

Pursuing the best ethical outcomes in design doesn’t ensure success, and it doesn’t really inspire investors. But as more and more technologists dangle themselves and their work as bait in funding frenzies, and as we celebrate their personal success over their impact, an ethical void has been growing. Into it we’ve fed the most intimate details of our lives: our triumphs and tragedies, our adventures, our fears, our sex, our attention. All we got in return was a cheap high.

We’re spending twice as much time on our phones as we did 10 years ago, and in that span we’ve witnessed the destabilization of liberal democracy, enabled in part by a feature on a social media platform being used exactly as it was designed. That’s not a coincidence. So when innovators fail to achieve the best ethical outcomes, we all suffer in one way or another. It’s not up to the universities or Silicon Valley to hand down a tome of curricular wisdom that you need a famous parent or a student loan to understand. We’ve always had the option to participate in the creation of better technology by being more selective about the products we adopt, but technology has become an addictive distraction.

In 2019, a small team’s potential impact can no longer be approached with ambivalence. Anyone involved in the creation of a product is taking a stance, and we need technology on our side from the outset. We need to evolve our ways of working not simply to optimize for fairytale success, but to craft a profitable design framework for averting dystopian failure.

How can we invite people to participate in the design of better tech, so that the next wave may be more diverse, inclusive, and empowering to human beings?

Angels, sharks, and lone wolves

Last year, the software startup where I’d helped lead design for a couple of years was presented with a juicy opportunity, one that posed a kind of challenge to our tech. The client came from a space that raised questions about our corporate ethics, a subject we’d never taken the time to align ourselves on, so before accepting the work our little team met to weigh in, even though it seemed more or less decided that we’d take the contract.

Within moments of what would become a day-long summit, polar differences of opinion began to emerge. One position was that we could hold whatever personal beliefs we liked, but that our technology should remain agnostic, available to just about anyone willing to pay for it. The other was that we should avoid the contract because of the implications our proximity to a volatile market might have for our ability to attract talent, win future work, and sustain long-term growth. You can probably guess which side I fell on.

The intensity of my position was underscored by a personal development my team was unaware of: my partner and I had learned, just the night before, that she was pregnant. At the height of the debate, a colleague asked me to clarify whether I intended to leave the company if we didn’t reject the contract, and I said yes. Three months later, perhaps against my better judgment, and despite continued efforts to build support for a responsible growth and product strategy, I left.

In the months surrounding my exit, a competitor of the client was raided by the FBI, and shortly afterward the client themselves were forced to discontinue all of their social marketing efforts. The challenge disappeared, no harm, no foul, but I couldn’t help feeling that I’d failed. I’d brought a solution to the table too late in the game, lost the trust of my team, and failed to build support for something I cared about. Had I designed the news feed for Facebook, I would still have been racked with guilt over the damage it caused, even if I’d left before it launched.

My experience proved to me that simply pursuing the best ethical outcomes isn’t the same as designing a good product. A company may succeed even where design does not, and once a company comes to believe in the power of that failure, the moment to right the ship may never return. So where does that responsibility begin?

Any idiot can build a system

Yonatan Zunger, a former Google engineer, told NPR: “Any idiot can build a system. Professionals think about how a system will fail. It’s very common for people to think about how a system will work if it’s used the way they imagine it, but they don’t think about how that system might work if it were used by a bad actor, or it could be used by just a perfectly ordinary person who’s just a little different from what the person designing it is like.”

Understanding how to make someone feel good is part of designing a good product. You model interactions to reinforce existing behaviors or to reward ones you’d like someone to develop. This is one of the ways success metrics are articulated within product teams, and it lays the groundwork for guiding new features that increase retention. But qualifying good and bad user behavior, and subsequently quantifying success and growth, is very much a subjective exercise. Pharmaceutical drugs also have a high rate of retention, which is objectively good for business, but retention and addiction look pretty different depending on your motivations.

Everything that’s happened in the last two years has made it clear that even small tech teams can wield outsized power, and that the potential for their impact to be exploited will always be difficult to thwart. If you found out the core philosophy of any regulated industry was to “move fast” and “break things,” you wouldn’t need a theoretical ethicist to tell you that kind of thinking is gross. But rebelling flies in the face of the mythologies that surround success in tech. When my coworker asked me whether I would quit, it was the treatment of ethics as a hindrance to growth that inspired so much frustration.

So as technology becomes more integrated not just into our digital lives, but into our public spaces and political institutions, it has become the responsibility of technologists to consider their work a representation of their values. Not taking action is an action.

The universities will not save us

In the absence of an ethical framework, designers and engineers are often indoctrinated into a culture that is incompatible with the principal function of technology: to enable us to fix more things, more quickly. It’s a school of thinking glorified by the most influential voices in tech, where terms like “pwned” and “move fast and break things” are spoken like passages from a holy text. It’s a vocabulary laden with shrouded bias, apathy, and a lack of foresight.

In an address at Princeton on the social implications of artificial intelligence, Microsoft President Brad Smith called for a “Hippocratic oath” among technologists to do no harm with new developments in the space. He stressed the importance of asking “not what computers can do, but what computers should do.”

Earlier this year, Stanford announced that it, too, would be adding ethics to its academic programs, in response to criticism that the industry it engendered has lost its way. Marc Tessier-Lavigne, the Silicon Valley university’s president, told the Financial Times, in regard to Facebook’s privacy woes, “Maybe some forethought seven to 10 years ago would have been helpful.” Forethought that Stanford itself, apparently, only mustered last year.

The brunt of the ethical curriculum at Stanford and other leading US technological universities will focus on artificial intelligence and machine learning, in an attempt to forecast and counter the damages of today’s technology. Stanford’s research will require fundraising, potentially with support from companies and philanthropists, some or all of whom will undoubtedly have some, let’s say, “skin” in the game.

While ethical standards are clearly necessary, I find it hard to believe that we depend on universities to identify them. Undergraduates need behavioral psychology classes as much as they need ethics classes, so that they might better forecast the ramifications of what they break. If we want to improve our relationship with technology, we can start by orienting product goals around human empowerment rather than around driving dependent behavior.

An enlightened approach to design

So how might we build social responsibility into our design methodologies in a way that’s compatible with the motivations of investors? We can start by identifying the limitations of those methodologies and bringing the work closer to the end user, empowering them to participate in the next evolution of consumer technology.

In a public lecture on ethics and technology at the World Intellectual Property Organization (WIPO), the Australian moral philosopher Peter Singer said this:

“We shouldn’t assume that evolution is guided by some kind of providence to reach the best ethical outcomes. We could imagine better outcomes: more intelligent, altruistic and compassionate humans, for example. Maybe that’s what we need to do to protect the future of humanity.”

In the 1960s and ’70s, following waves of Western social and political reform, Scandinavian trade unions developed a methodology called participatory design. It’s a familiar practice to planners and architects, who use it in the development of built environments, and it’s known in the US as co-op design. These methodologies were engendered by the social movements of that era, which argued that involving people more directly in the design process would produce the best outcomes, both for the business behind a project and for its end user.

Unlike user-centered design, participatory design evolved out of a desire to democratize placemaking. It turns design into a community initiative led by multiple stakeholders and powered by community participation, growth, and regeneration. Under a framework like this, design becomes a learning mechanism for building meaningful relationships, and it makes knowledge accessible through complete transparency in communication and decision-making. It’s impactful in practice and in output. It demystifies the process of design by welcoming diverse perspectives into a fluid process that respects human agency, curiosity, and community.

Imagining better outcomes

The forces at work here, and the conflicting motivations of innovators and investors, make all of this far from a trivial change. The first and most obvious conflict is that innovation is expensive, and earning a degree to become knowledgeable about innovation is usually expensive too, though also profoundly profitable. That creates a gravitational pull between tech and venture capital, and ultimately tech is shaped more by the forces of profitability than by cultural or social factors. This is where I think the responsibility of designers enters: design keeps technology and finance from crashing into one another by caring about the human questions.

It’s clear that profit is not the same as success. Profoundly profitable businesses fail all the time when they don’t care about their impact; US banks in 2008 are a prime example. So what difference does it make if I occasionally run at a monetary deficit, so long as I maintain a consistent impact surplus? I’m not an economist, but surely the cumulative cost of these mistakes raises the question: how safe is it to let technology be led by private interests when a business or product can appear successful yet utterly fail to serve the needs and wellbeing of people, small businesses, the environment, and our politics?

We can’t afford to wait for anyone to tell us, but all technology could be markedly improved by doing as Peter Singer urges: simply imagine more intelligent, altruistic, and compassionate humans, and design for their needs, rather than for the needs of those who probably don’t even use the technology in the first place.