
The 2010s will likely be remembered as the decade of the rise of platforms. Google, Amazon, Facebook, Airbnb, Uber — all of these companies have become more than just billion-dollar businesses. Over the last ten years they have started to play an essential role in the everyday life of most people. We increasingly rely on platforms and their services for our social, professional, and political needs. This, of course, is what separates them from “normal companies.” Platforms tend to monopolize aspects of life — such as socializing, dating, room-letting, searching — and make a profit by utilizing their monopoly. But what distinguishes Google, Facebook, Airbnb, and Uber from other companies is not just their economics. How these platforms work, how they regulate their services, and how they manage their users is a fundamentally political matter.

Currently, there are two prominent lines of debate about the political implications of platforms and their algorithmic models: the black-box debate and the post-truth debate, explained below. Both discussions highlight important problems, but by focusing on the need for greater transparency and on the dissolution of the public sphere respectively, they also distract from more urgent political concerns. Instead, I suggest we focus on the forms of power platforms and their algorithms exercise. Following Michel Foucault, platforms may be understood to employ a specific form of biopower, and should therefore be understood as biopolitical companies: companies that have the means and the intention to govern populations. This biopolitical quality puts them at odds with democratic states, which were historically thought to be the primary loci of biopower. Whereas the rise of platforms has consistently been greeted with enthusiasm for their democratizing potential, it may now be time to worry about the power platforms wield and what that power means for democratic states.

Two lines of debate on the politics of platforms

One debate on the politics of platforms — the black-box debate — revolves around the fact that platforms and their algorithmic models mainly operate in the dark. Even someone capable of understanding the complex and arcane workings of a platform’s algorithmic model would rarely get the chance to examine it, since such models are typically safeguarded by intellectual property law. This is especially problematic because many platforms and their models have gained what Tarleton Gillespie, in his research on algorithms, has called “public relevance”: the actions and decisions of these models affect so many people that they are of interest to society in general.

This argument is perfectly illustrated in an exchange French President Emmanuel Macron had with journalist Nicholas Thompson in a 2018 interview for WIRED. Asked about the implications of algorithmic decision-making, Macron warned that “AI could totally jeopardize democracy” and thus demanded that “you have to create the conditions of fairness of the algorithm and of its full transparency.” He demanded this explicitly as the democratically elected leader of his country. “I have to be confident for my people,” Macron argued, “that there is no bias, at least no unfair bias, in this algorithm.” Following this line of thought, we would have to struggle for transparent, fair, and ethical standards in the design of algorithmic models, and force platforms to adhere to those standards in the design of their services.

The second political problematic of platforms is their influence on public debate. Popular concepts like “filter bubble” and “echo chamber” point to, and denounce, the way platforms fragment and ultimately break down the public sphere. Widespread algorithmic models like those used by Google or Facebook, for example, confine citizens to a distinct informational space that resembles the public sphere but presents them with only a limited set of ideas and arguments. This is said to eventually usher in a “post-truth” era, in which the very foundation of reasonable public debate disappears and democracy becomes a question of strategic misinformation and deception. One does not need to share the emphasis that scholars of deliberative democracy place on the public sphere to find this troubling. Democracy seems unthinkable without a more or less rational debate and the possibility for citizens to make informed choices. As the algorithmic models of platforms increasingly decide on trending topics, public relevance, and — apparently — “truth,” they undermine public debate, the democratic state, and ultimately democracy itself.

The ideology of transparency and the authority of truth

Both debates on the political significance of platforms are important, but both inadequately address the question of power. The black-box argument boils down to the idea that the power of algorithmic models will automatically disappear once transparency comes into play. This rests on a popular misconception about power: that power loses its momentum when it becomes transparent and ethical, and that power is only power when it is opaque and corrupt. Transparent, openly accessible algorithmic models may allow some people to understand their workings, but the demand for transparency wrongly implies that the problem of algorithmic power is essentially a problem of opacity. Transparency may be a solution in some cases of bias or fraud, but it is not in itself a threat to the power of platforms. A platform with a perfectly transparent and fair model could still exercise power, and we might still find this power worthy of critique.

Secondly, even though the dissolution of the public sphere is an urgent concern, concepts such as filter bubble or post-truth state the problem wrongly. These concepts implicitly call for an authoritarian management of information flows, hate speech, misinformation, and — ultimately — truth itself. While we are usually, and rightly, reluctant to concede such an important role to anyone, let alone the state, for fear of abuse of power, we readily concede it to private companies such as Facebook or Google. Facebook as guardian of truth is not just a distressing prospect — it obscures the real issue: why and how have platforms gained this powerful position in the first place?

What nudging and platform capitalism tell us about the power of platforms

There are two discussions that provide clues about how platforms came to wield such power. The first is the discussion about “nudging.” A nudge is an effort to influence a person’s behavior for their benefit without negating their freedom, that is, without restricting their choices. The typical example is the arrangement of desserts in a cafeteria: if you put the fruit in front and the sweets in the back, you influence people visiting the cafeteria to choose the healthy fruit over the sweet dessert, but you do not restrict the overall choice (they could still grab a sweet).

Nudging is a proto-biopolitical discourse. It shows how organizations started to think about their employees, staff, users, customers, students, etc., as groups whose well-being matters and can be advanced by influencing their behavior. Being nudged is a basic experience of using platforms. Google’s search engine, for example, sorts the results of your search queries by relevance (or what it considers relevance). This nudges you to look only at the first ten results, although you are perfectly free to look at all two million. The more you use the search engine, the more cleverly it can nudge you, that is, help you “make the right choices” based on previous searches, visited websites, or your location; a toy sketch of this mechanism follows below. Interestingly, nudging — like biopolitics — needs a population, knowledge about this population, and opportunities to nudge, all of which platforms such as Google, Amazon, and Facebook have in abundance.
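To make the mechanism concrete, here is a minimal sketch, in Python, of ranking as a nudge. Everything in it (the data, the scoring weights, the names) is invented for illustration and mirrors no actual platform’s code; the point is only that the full set of results remains available while the ordering steers attention.

```python
# Toy illustration of ranking-as-nudging: nothing is removed from the
# result set, but personalization decides what the user sees first.
# All data, weights, and field names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Result:
    url: str
    base_relevance: float  # the platform's generic relevance estimate


@dataclass
class UserProfile:
    visited: set[str] = field(default_factory=set)  # previously visited sites
    location: str = "Osnabrueck"                    # invented locality signal


def personalized_score(result: Result, user: UserProfile) -> float:
    """Combine generic relevance with signals about this particular user."""
    score = result.base_relevance
    if result.url in user.visited:
        score += 0.5  # invented weight: familiar sites float upward
    if user.location.lower() in result.url.lower():
        score += 0.3  # invented weight: "local" results float upward
    return score


def rank(results: list[Result], user: UserProfile) -> list[Result]:
    # The full list is always returned; the nudge lies purely in the
    # ordering, which decides what the user actually looks at.
    return sorted(results, key=lambda r: personalized_score(r, user), reverse=True)


if __name__ == "__main__":
    results = [
        Result("https://encyclopedia.example/platforms", 0.9),
        Result("https://news.example/osnabrueck-platforms", 0.6),
        Result("https://blog.example/foucault-biopower", 0.7),
    ]
    user = UserProfile(visited={"https://blog.example/foucault-biopower"})
    for r in rank(results, user):
        print(f"{personalized_score(r, user):.2f}  {r.url}")
```

Nothing here is filtered out, so the user’s formal freedom of choice is untouched; yet which links they actually see, and click, is governed by the score. That is the structure of the nudge.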

Another clue about the kind of power platforms have is found in the literature on platform economics. The general point here is to understand platforms as a new stage in the capitalist mode of production, one that relies on specific modes of accumulation, notably rent extraction. To be able to extract rents, platforms typically tend towards monopolization and try to become the only platform for a given area or need. The success of a platform is accordingly expressed not primarily in profits but in the number of its users and their exclusivity.

Frank Pasquale hinted at a very important implication of this: what platforms actually want is to become regulators of a specific need or a certain area. They want to gain a monopoly and decide on the regulation of information retrieval, news consumption, socializing, room-letting, taxi traffic, or dating. What these platforms want, in other words, is what state governments normally do: they want to govern. Sure, as companies they are also interested in profits. But we should take their aspirations to “make the world a better place” or to “bring people together” more seriously, because such aspirations clearly indicate a will to be more than just a company that generates surplus value.

A question of power, or: What is a biopolitical company?

On this, Foucault’s concepts of biopower and governmentality are helpful. In his famous lectures from 1977 and 1978, Foucault defines governmentality in the following way. “First,” he writes,

“by ‘governmentality’ I understand the ensemble formed by institutions, procedures, analyses and reflections, calculations, and tactics that allow the exercise of this very specific, albeit very complex, power that has the population as its target, political economy as its major form of knowledge, and apparatuses of security as its essential technical instrument. […] Finally, by ‘governmentality’ I think we should understand the process, or rather, the result of the process by which the state of justice of the Middle Ages became the administrative state in the fifteenth and sixteenth centuries and was gradually ‘governmentalized.’”

In the first sense, governmentality amounts to a specific kind of benevolent power exercised over a population by means of political economy and apparatuses of security. In the last sense, governmentality — what Foucault at times calls biopower or biopolitics — denotes a specific historical process in which the medieval state became a modern, governmental state.

When applied to platforms, these definitions show that such companies govern populations — in the case of Facebook, two billion people — and are preoccupied not only with managing the interactions and social life on their platforms but also with the well-being of their users more broadly. The attainment of populations (as users), the knowledge about populations (big data), and the opportunities to govern them have been “pluralized” with the spread of networked personal computers and powerful algorithmic models. What long seemed exclusive to the state (statistics, a population, security apparatuses) is now attainable by other organizations. Platforms like Facebook, Google, and Amazon understand best that this pluralization provides them with a new kind of power. Industrial companies have long wielded sovereign or disciplinary forms of power inside the factory. Platforms, however, may be among the first companies to wield a different kind of power: biopower.

The rise of platforms may thus be understood as a process of governmentalization of the company, analogous to the governmentalization of the state that Foucault described. Foucault sought to understand how the sovereign power of the medieval state gave way to the biopolitical power of the liberal-democratic state. We are witnessing something similar with platforms: they are as far removed from their Taylorist, disciplinary precursors as the governmental state is from the sovereign rule of the king. Companies like Facebook, Google, or Amazon demonstrate the governmentalization of the company and are — as a matter of fact — biopolitical companies.

Biopolitical companies and the democratic state

In his interview, Emmanuel Macron demanded that algorithmic models be transparent so that he could assure his people that the models were not unfair. If we conceptualize Facebook, Google, Amazon, or Airbnb as biopolitical companies, we see that there is more to the problem of platforms than transparency. Platforms compete with democratically elected state governments for power — and power here means the ability to govern populations. Rather than a friendly co-existence between platforms and states, this perspective suggests a rivalry: state governments and biopolitical companies both seek to govern populations, and these populations are, in many instances, identical. So instead of making sure that algorithms are transparent and ethical, democrats like Macron should begin to understand that liberal-democratic states have, in the last decade, lost their monopoly on the government of their populations.

This, too, reveals a problem with the argument about the dissolution of the public sphere and the loss of the authority of truth in public debate. The argument puts platforms like Facebook directly in a governing position by demanding that they manage hate speech, fake news, and information flows. In doing so, we affirm Facebook’s role as a quasi-government and — irritatingly — demand to be governed even better by its algorithmic models and its global cognitariat of content moderators and fact checkers. The discussion ultimately calls for an authority to decide what is true or false, and this authority is rarely thought to be the state. Instead, we implicitly or explicitly envision platforms as this authority, which is not only problematic in its own right but also undermines the democratic state.

Cathy O’Neil closes her book Weapons of Math Destruction by denouncing “Facebook’s enormous power to affect what we learn, how we feel, and whether we vote. Its platform is massive, powerful, and opaque.” However, she concludes, Facebook and Google are not yet politically relevant because we “have no evidence that the companies are using their networks to cause harm.” This is precisely where a theory of the power of platforms has to intervene. Platforms do not have to cause harm to be politically relevant. Indeed, the fact that they are still seen as harmless, unproblematic tools should itself arouse suspicion. O’Neil’s conclusion manifests an appalling lack of imagination about the political role of platforms. A critical political theory of platforms, and of the forms of power they hold, has to show how these companies already deeply affect who we are and how we live together as a society.

Janosik Herder is a researcher in political theory at the University of Osnabrück. He has published on the political role of algorithms and on the works of Michel Foucault, Gilles Deleuze, and Karl Marx, and is working on a genealogy of the concept of information.

This article was originally published on January 25, 2019.