De-Platform the Internet

16 Jan 2021 - Christina Eichelkraut , Elio Grieco

While the internet is inarguably the greatest invention in human telecommunication to date, its early idealism and utopian promise have given way to a profit-driven, algorithmically manipulated echo chamber that is starting to cause harm in the real world. How did we get here? More importantly, how can we restore the internet’s promise?

Updated: Jan 20, 2021

A yellow unplugged ethernet cord

Photo by Markus Spiske. You can find his work at @markusspiske.

The real-life gore wrought by algorithms

The internet began as an experiment in communication resiliency and freedom. At first, there wasn’t enough online content to persuade people to open their wallets and pay for access. Despite this, commercial interests still saw a profit opportunity. Before long, the lofty ideals upon which the internet was built were overtaken by the relentless pursuit of the all-mighty dollar.

Internet users first saw this manifest as online ads: initially just little banners at the top or side of pages, relatively unobtrusive. Soon after, advertisers began to target ads to specific demographics, and even individual users, based on their digital crumb trails. As these companies grew on ad revenue, their appetite for information about us became insatiable. The screens we stared into began to stare back at us.

This led to another disturbing trend: “maximizing engagement.” The longer a user stares at the screen, the more ads they can be shown. The race to figure out how to keep users glued to their devices was on. The answer was disheartening and increasingly appears to have become dangerous: algorithms revealed that the best way to keep users engaged was not with material that gave people joy, but rather content that enraged them. People did, indeed, spend more of their time staring at screens. But they also spent more time engaging in anxiety- or rage-inducing debates, sometimes with family, other times with friends. Internet users clicked on ads and bought more things, while friendships and even family ties deteriorated into vitriolic debates and exposure to ever more self-reinforcing content and groups.

As we moved fast and broke things, it was more than just technical and business paradigms that suffered. These algorithms, in their unending quest to maximize engagement and their total lack of better judgment, led formerly normal members of the public down ever deeper rabbit holes. As people became more and more addicted to the dopamine hit of likes and shares, they became extensions of the algorithm, shedding their humanity click by click to become, essentially, the hands of the robot. Eventually, this culminated in a riot that endangered the nation’s lawmakers.

Algorithms were a definitive cause of the Capitol attack. As dissatisfaction with the election’s results was fomented, Facebook fed rioters tips and suggestions that maximized profit for Facebook and its advertisers right up until the final moments. Specifically, this included Facebook serving ads for paramilitary gear to users who used the #StoptheSteal hashtag.

Companies thrived on a never-ending stream of passive income. Meanwhile, our social fabric began to fray.

Communications Decency Act Section 230

The now infamous Section 230 of the Communications Decency Act basically says platforms are not legally liable for content posted by their users. While this protection was unnecessary in a world where people ran software directly on the computers in their own homes, the ubiquitous use of Cloud™ services has changed all that. On the Cloud™, the software runs as a service in some faraway datacenter. As a result, it became necessary to protect the owner of the datacenter. After all, it’s not reasonable to hold them criminally liable for the actions of the millions or billions of users using the software on their servers. We all make mistakes, and it’s unrealistic to expect that millions of people will all behave.

While many, including the authors of this article, think that a move away from today’s massive cloud services could be a good thing, that will take time. Until the majority of users have moved on to better, decentralized systems, revoking Section 230 of the CDA would have immediate and catastrophic consequences. Though there have been many attempts to revoke Section 230, thankfully none have succeeded.

Centralization and Censorship

For decades, platforms have been left to police themselves. Even the most pro-free speech platforms moderate to keep illegal content such as credible threats, sex trafficking information and child pornography off their sites.

Then the Capitol was invaded by a protest-turned-riot, and President Donald Trump was banned from Facebook and Twitter, the first world leader to be banned from social media. Meanwhile, Parler, the indie social media platform that became a refuge for the alt-right, was kicked off Amazon’s servers, a death knell after Apple and Google had already pulled its app from their respective stores.

All of this led to a shift in the public mood. Prior to recent events, the majority of the public was content to ignore platforms banning the speech of marginalized groups. Suddenly, a very vocal group was made acutely aware of the dangers of centralized systems and their immense power to arbitrarily control content. The rapid and wide-scale bans of many right-wing figures, platforms, and concepts became a wake-up call. While censoring Trump and de-platforming Parler may have been necessary in the short term to prevent violence, the long-term consequences are worth considering.

The subsequent national hand-wringing over the role and power of corporations in public discourse has thrown the dangers of internet centralization into sharp relief. We can no longer as a society deny that corporations, not the people’s representatives in government, are the real arbiters of free speech. We are setting a precedent of accepting this as normal when in fact it’s appalling.

Protocols over Platforms

It’s well past time we returned to the ideas and structure of the early internet, when people owned their computers and had direct control of their data. There was no conflict between the motivations of those who controlled the data and those who provided it; they were one and the same.

Today, people’s data is controlled by others, whether a cloud service provider or the smart fridge and Facebook profile quietly collecting personal tidbits. The user’s motivations are hyperpersonal, such as keeping the bedroom at the perfect temperature or showing off a child’s swim meet trophy. The companies, in contrast, want this data so they can show you ads for a better smart thermostat or children’s swimwear.

It is possible to return to a federated internet. Instead of a platform (a specific piece of code that runs on only one server or set of servers, like Google’s apps or social media platforms like Facebook), users could choose which server to log onto, or even run their own.

That would require using federated services, like Mastodon for social media or Proton for email. It is nearly impossible to be banned outright from these services, because if you are booted from one server you can simply use a different one. Or, if you run your own server, then aside from other users blocking you, access is entirely up to you. External companies no longer control your access to the service. That’s precisely why marginalized or persecuted communities, particularly in countries where the government is hostile to them, congregate in these spaces.
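The mechanics of why a ban can’t follow you across a federated network can be sketched in a few lines. The model below is purely illustrative (the `Server` class and its message format are invented for this example, not Mastodon’s actual ActivityPub protocol): several independent servers speak the same relay format, so a user banned from one server can register on another and still reach the whole network.

```python
# Toy model of federation (hypothetical, NOT a real protocol):
# independent servers relay messages to their peers, so no single
# server's ban can silence a user network-wide.

class Server:
    def __init__(self, name):
        self.name = name
        self.users = set()
        self.banned = set()
        self.inbox = []   # messages visible on this server
        self.peers = []   # other servers this one federates with

    def register(self, user):
        if user in self.banned:
            raise PermissionError(f"{user} is banned from {self.name}")
        self.users.add(user)

    def ban(self, user):
        self.users.discard(user)
        self.banned.add(user)

    def post(self, user, text):
        # deliver locally, then relay the message to every federated peer
        msg = (f"{user}@{self.name}", text)
        self.inbox.append(msg)
        for peer in self.peers:
            peer.inbox.append(msg)

# Two independently operated servers agree on the same protocol.
a, b = Server("a.example"), Server("b.example")
a.peers, b.peers = [b], [a]

a.register("alice")
a.post("alice", "hello fediverse")

a.ban("alice")        # booted from one server...
b.register("alice")   # ...but free to join another
b.post("alice", "still here")  # and the whole network still hears her
```

The key design point is that moderation stays local: `a.example` controls its own ban list, but it cannot revoke alice’s access to the network, only to itself.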

However, servers still need humans to administer them. Ideally, this would not require a computer science degree or IT certification. This technology exists but is still rough around the edges and generally only accessible to programmers.

It’s even possible to go a step beyond federated services with distributed systems. In a distributed system, programs run on individual computers that exchange data directly with other computers running the same program.

For example, say a person and their spouse both use Syncthing to share photos. Unlike Google Drive, where the user’s personal data and photos sit on servers owned and operated by Google, the Syncthing photos are stored only on the users’ own devices. The owner is in complete control of who else can see each photo. If a photo is shared with another person, it is stored only on that person’s device. There is no corporate server in an unknown location in control of the personal photos, and the user never had to “sign up” and hand a private entity personal information just to share photos they own. The one catch is that both users have to run Syncthing. And because it is open-source software, a technologically inclined owner can tinker with it all they want.
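The core idea of device-to-device sync can be sketched very simply. This is an invented illustration, not Syncthing’s actual protocol (which exchanges data in blocks over an authenticated connection): two nodes compare content hashes of what they hold and pull only the files they are missing, with no server in between.

```python
# Illustrative peer-to-peer sync sketch (NOT Syncthing's real protocol):
# each device holds its own files and pulls missing ones directly
# from a peer, comparing content hashes rather than trusting names.
import hashlib

class Node:
    def __init__(self):
        self.files = {}  # filename -> raw bytes, stored only on this device

    def digest(self):
        # Advertise what we have as content hashes, not the data itself.
        return {name: hashlib.sha256(data).hexdigest()
                for name, data in self.files.items()}

    def sync_from(self, peer):
        # Pull anything the peer has that we lack or that differs.
        ours = self.digest()
        for name, h in peer.digest().items():
            if ours.get(name) != h:
                self.files[name] = peer.files[name]

# Two devices, no corporate server in between.
laptop, phone = Node(), Node()
laptop.files["vacation.jpg"] = b"\x89...raw photo bytes..."

phone.sync_from(laptop)  # the photo now exists only on these two devices
```

Comparing hashes instead of raw bytes is also why real sync tools can detect changes cheaply: a file is re-transferred only when its content actually differs.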

Distributed technologies give users more reliability, more security, better privacy and ultimately more autonomy over personal data.

Now is the time

The corporate erosion of the digital public marketplace has been going on for years. Recent events, however disheartening and frightening, can hopefully serve as a catalyst for fundamental reform. The internet needs to once again become a tool for the people, rather than the people being its product.