Amber: Of those options, I think it’s most like the relationship between a human and a car. The AI that we create is really a dumb tool, currently with no agency or moral responsibility of its own. Still, the tools are ethically relevant as soon as they are used. Is the car used to visit your sick grandma? Or is the car a weapon in a terror attack? Perhaps the car wasn’t being used as a weapon, but accidentally killed someone in a collision. All these circumstances carry ethical implications. In the case of AI snafus, like Google’s labelling Black Americans “gorillas,” or word2vec propagating gender stereotypes, more often than not our use of the tool is accidentally causing harm.
What’s scary and scandalous about this is that we internet users continue refueling the cars with our data, but let tech monopolies drive the cars as they please. The dWeb may help AI ethics by putting us users behind the wheel and letting us make those ethical judgment calls for ourselves.
Amber: Trying to conceptualize the dWeb can really trip people up because it is so simple that it’s easy to feel like you just don’t “get it,” like you’re missing something. But the reality is that the dWeb is a revisiting of the vintage internet, in which the idea was simply to connect a bunch of computers around the globe. Originally, it was personal devices that served as the backbone of the internet. Way back in the day, if I wanted my files to be accessible online, I actually had to be logged on via dialup so others could access those files from my personal computer.
Fast forward, and we’ve witnessed the rise of server farms to uphold tremendous surveillance capitalism industries. In short, it’s rare that people store their own data and serve their own sites. They simply use Google search, Facebook, LinkedIn, have a WordPress site, etc. It is true that these services each provided convenience in a way–they offered user experiences that we all enjoyed.
The dWeb, however, is about bringing those same conveniences and user interface experiences together with that vintage internet under the hood. It’s about turning today’s average online consumer into the average online prosumer: one who contributes bandwidth, storage, and computation as their devices connect to the web of everyone else’s devices.
David: Why and how would a decentralized internet be more sustainable than our current internet?
Amber: The dWeb is more sustainable because it is more efficient. I mean this both in terms of hardware and in economic terms. In terms of hardware, imagine if each of the devices lying dormant around us were suddenly to serve us bandwidth and store our files. We’re talking laptops, cell phones, smart fridges, old Xboxes, FitBits, SleepNumber beds (which are extremely disturbing in their surveillance, BTW!). If all of these things connected online to serve our browsing and streaming and shopping experiences, their combined power would dwarf the resources of the tech monopolies. So we can add massive storage and computational power to the world, using what we already have lying around.
But the dWeb is also more efficient in terms of economics. If we bring these devices online and encrypt our data such that only the people we want to access it can access it, the internet of things is sufficiently powerful to keep our information (think posts and sites) accessible to the right people even when our devices go offline. There’s no need to keep AWS on hand. Furthermore, since our data is safely encrypted, companies can’t mine it without our explicit permission. It’s a game-changer for tech companies, which will have to pivot away from targeting ads. It will force companies to align their long-term incentives with those of their users in order to find a working business model. And that nonzero-sum collaboration is simply more sustainable.
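The idea of encrypted data hosted by other people’s devices can be sketched in a few lines. Everything below is invented for illustration: the “network” is just a dictionary standing in for a peer swarm, and the XOR keystream is a toy stand-in for a real cipher (not secure, never use it in practice). The point is only that peers can store and serve a blob, addressed by its hash, without being able to read it.

```python
import hashlib

def keystream_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stream cipher: XOR the plaintext with a SHA-256-derived
    # keystream. A stand-in for a real cipher -- illustration only.
    out = bytearray()
    counter = 0
    while len(out) < len(plaintext):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(plaintext, out))

def publish(content: bytes, key: bytes, network: dict) -> str:
    # Peers store only ciphertext, addressed by its own hash, so any
    # device can serve the blob even while its author is offline.
    blob = keystream_encrypt(key, content)
    address = hashlib.sha256(blob).hexdigest()
    network[address] = blob
    return address

def fetch(address: str, key: bytes, network: dict) -> bytes:
    blob = network[address]
    # The hash doubles as an integrity check on what the peer served.
    assert hashlib.sha256(blob).hexdigest() == address
    return keystream_encrypt(key, blob)  # XOR is its own inverse

network = {}                       # stands in for everyone's devices
key = b"shared-with-friends-only"  # only key holders can read the post
addr = publish(b"my post", key, network)
print(fetch(addr, key, network))   # b'my post'
```

Anyone in the swarm can replicate and serve `network[addr]`, but without `key` the blob is opaque to them, which is why explicit permission becomes the price of data mining.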
David: How will dWeb merge social media and the rest of the web?
Amber: It’s important to keep in mind how basic and natural the dWeb is.
This question is sort of like asking, “how will your identity merge with the rest of your interactions in the world?” What is bizarre is not the merging of social media and the rest of the web, but rather the current system which somehow compartmentalizes our identity and relationships into just a handful of sites we call “social media.” Our relationships and our identities are the foundation from which we interact with and interpret the world. It is strange indeed that we need to set up profiles, import photos, and find friends every time we join a new social media site. There is no analogous process to this when we go to a party, arrive at work, or travel to a new destination. Our selves, and our memories and connections with others just show up with us. They are a fundamental part of us. We can’t leave home without them. The dWeb, likewise, means that each person is their own person and their own nexus of relationships–no need to log in, to create accounts, or to describe themselves.
And this comes with a lot of cool perks that aren’t talked about much. In the real world, who we are and who we spend time with impacts more than who we share pictures and life updates with. Who we are and who we spend time with impacts our very worldviews. The dWeb is about so much more than privacy. It’s about pushing questions of “truth” and “fairness” back into the hands of average internet users, such that they and their trusted peers decipher the veracity of a news article, or choose how algorithms function and make decisions (in search, ads, health care, what have you).
Amber: I think this is a savvy move for Twitter–although they aren’t pouring enough resources into it. In reality, they stand to make more money than before if they are able to become the first big mover of dWeb social media protocols. Although they cannot mine data in a decentralized setting (except for users who choose to share their keys with Twitter), they can make a boatload of money by pivoting to charge for bandwidth provision (sort of like a telecom company).
Specifically, we will see more and more confidence intervals accompanying things like identities (is this the real Elon Musk?) and news articles (is this fake news?). These confidence intervals will be unique to each person, based on their social networks and their own specified settings online. Data mining will be done on a personal level, such that each person’s dWeb presence calculates insights and makes decisions on the basis of their own prior data, and the data of others who have shared with them (i.e., movie recommendations based on similar watch histories among friends rather than strangers).
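That friends-first flavor of data mining can be sketched with a toy recommender. The names, ratings, and the simple cosine-similarity scoring are all invented for illustration; what matters is that similarity is computed only across one’s own web of trust, so a stranger’s data never influences the result.

```python
from math import sqrt

# Invented watch histories; "spam" is a stranger outside the web of trust.
watch_history = {
    "me":   {"Arrival": 5, "Dune": 4, "Her": 5},
    "ana":  {"Arrival": 5, "Dune": 5, "Tenet": 4},
    "bo":   {"Her": 4, "Tenet": 2, "Dune": 3},
    "spam": {"AdMovie": 5},
}
friends = {"ana", "bo"}

def similarity(a: dict, b: dict) -> float:
    # Cosine similarity over the movies both people have rated.
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[m] * b[m] for m in shared)
    na = sqrt(sum(a[m] ** 2 for m in shared))
    nb = sqrt(sum(b[m] ** 2 for m in shared))
    return dot / (na * nb)

def recommend(user: str, history: dict, trusted: set) -> list:
    mine = history[user]
    scores = {}
    for peer in trusted:                     # only the web of trust
        sim = similarity(mine, history[peer])
        for movie, rating in history[peer].items():
            if movie not in mine:            # only unseen movies
                scores[movie] = scores.get(movie, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("me", watch_history, friends))  # ['Tenet']
```

Because "spam" is never consulted, its heavily promoted "AdMovie" cannot appear in the recommendations, no matter how loudly it rates itself.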
Advertising models will be flipped on their heads–such that users advertise to companies, rather than companies advertising to users. Audits of common protocols will be taken for granted, such that running a Google search will return not only results based on your own web of trust, but also information about how your search results would look different as you adopted larger or smaller pools of trust. Those are some of the things we would regularly see and interact with.
All these changes would lead to shifts in ethical conversations about the internet, too. Privacy, for instance, would be a much smaller issue than it currently is, since data would be private by default. Conversations about algorithmic fairness would shift as well. Instead of us relying on one or two search algorithms as a whole, on the dWeb, we will choose which algorithms we want to rely on, and those algorithms will work within the parameters of our social networks to automate decisions.
This means that users wield a lot of moral responsibility–what sorts of speech and images will they allow themselves to be exposed to? If they live in a particularly sexist bubble of the country, are they going to expand their webs of trust to counteract that bias? Are they okay consuming and sharing content which is protected by IP law, now that the government doesn’t have a way to know?
David: In terms of algorithmic bias, do you think forcing social networks to go open source could have a larger impact than trying to have other decentralized social networks reach critical mass from the ground up? It’s not mutually exclusive. But if I were trying to solve the problem of “why is this in my newsfeed right now?”, I think reading how the existing code functions in layman’s terms could be a more straightforward route to identifying algorithmic bias.
Amber: I think that open-sourcing a code base is a step in the right direction. Still, just because the code is open-sourced doesn’t mean that the problems will indeed be fixed. Facebook, Google, Amazon–they all have their own agendas and interests to serve, so it is unlikely that they will immediately hop to correcting unfair algorithms in their systems. And their network effects are so large that people aren’t going to shift to a better codebase version overnight.
More importantly, though, fairness in AI is an intractable problem, because ideas about what is fair vary widely. As I’ve written before, different ideas of what is fair are algorithmically incompatible. It is akin to the tradeoff between distributive and procedural justice.
Which is more fair: that men and women must meet the same algorithmic decision thresholds, or that the algorithm’s predictive performance must be equal for men and women? You can’t have both. The dWeb takes a moral stance on this. It says: each person gets to choose which definition of fairness to implement, and under which circumstances. Open-sourcing Facebook and Google doesn’t accomplish that.
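The incompatibility can be shown with invented numbers. Give two groups with different base rates the exact same decision rule, identical true-positive and false-positive rates, and the precision of a positive decision still differs between the groups:

```python
# Toy numbers (invented for illustration): a classifier that treats
# both groups identically at the threshold -- same TPR, same FPR --
# still yields unequal predictive performance when base rates differ.

def precision(qualified: float, unqualified: float,
              tpr: float, fpr: float) -> float:
    true_pos = tpr * qualified     # qualified people approved
    false_pos = fpr * unqualified  # unqualified people approved
    return true_pos / (true_pos + false_pos)

tpr, fpr = 0.8, 0.2                # identical error rates for both groups

# Group A: 60 of 100 qualified; Group B: 20 of 100 qualified.
ppv_a = precision(60, 40, tpr, fpr)   # 48 / 56 ~= 0.857
ppv_b = precision(20, 80, tpr, fpr)   # 16 / 32  = 0.5

print(round(ppv_a, 3), round(ppv_b, 3))  # 0.857 0.5
```

Equal treatment at the threshold, yet an approval means something quite different in each group; equalizing precision instead would force unequal thresholds. Choosing between the two is exactly the judgment call Amber says the dWeb hands back to each user.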
David: What are future digital inflection points that when you see them live, you’ll think, “oh of course the web is decentralized now”?
Amber: When Steve Jobs pushed the idea of a personal computer, people didn’t understand why anyone but corporations would need them. They already had typewriters and filing cabinets. The same thing is happening now with platforms. People don’t really understand why we’d want to be our own Twitter or Reddit, when we’ve already got a one-size-fits-all version we connect to via telecom monopolies.
We are past the personal computing digital inflection point now, though I am not sure exactly where that inflection point happened. I presume that we’ll look back on the centralized web similarly–wondering why so few people saw the value of personal sovereignty–and all the personalized data science, privacy, algorithmic control, and truth-deciphering that goes with it.
Amber: It’s fascinating to think about advertising on the dWeb. Shifts in advertising will most likely happen gradually. As data becomes more private, I think we’ll see content-relevant advertising like Hacker Noon’s increase again. Over the long term, however, the dWeb will likely flip advertising on its head–such that people are advertising their needs and desires to companies. For instance, a person who wishes to get in better shape may choose to share this fact publicly (in essence, advertise this goal to the world), and allow gyms, sporting leagues, and sporting goods stores to advertise products that nudge them in the right direction (and perhaps even get paid a small fee for receiving those ads). So instead of ads being targeted at us in a predatory way, we’ll opt into ads we’ve chosen to aid us toward our goals rather than lure us into impulse buys.
David: Very cool to think about users collecting their own referral fees when they become leads for various goods and services. If you could make one thing about how the internet works illegal, what would it be? For reference, mine would be banning no-reply email addresses.
Amber: I would make it illegal for a corporation to take any action that they disallow their users from taking.
David: What are actionable and simple ways to be a more responsible internet user?