Democracy is threatened by an arms race that the forces of deception are winning. Microtargeted computational propaganda, organized troll brigades, coordinated networks of bots, malware, scams, epidemic misinformation, miscreant communities such as 4chan and 8chan, and professionally crafted rivers of disinformation continue to evolve, infest, and pollute the public sphere. But the potential educational antidotes — widespread training in critical thinking, media literacies, and crap detection — are moving at a leisurely pace, if at all.
I started writing about the potential of computer-mediated communication in 1987, decades before online communication became widely known as “social media.” My inquiries about where the largely benign online culture of the 1980s might go terribly wrong led me to the concept of the “public sphere,” which had received a new lease on life thanks to the then-recent translation into English of Jürgen Habermas’ 1962 work on the subject, The Structural Transformation of the Public Sphere. “What is the most important critical uncertainty about mass adoption of computer-mediated communication?” was the question I asked myself. I decided that the most serious uncertainty about this emerging medium was whether citizens would gain or lose liberty as adoption of digital media and networks rose.
Although Habermas’ prose is dense, his ideas are simple. Democracies are not just about voting for leaders and policy-makers. Democratic societies can only take root in populations that are educated enough and free enough to communicate about issues of concern and to form public opinion that influences policy. For example, the civil rights and women’s movements involved civil disobedience, as well as political and judicial initiatives and electoral labor. But the overall effort was undergirded by the public opinion that emerged from arguments, demonstrations, and debates about the rights of these emerging publics. When he described the origins of 18th-century constitutional republics in the coffee-house arguments among (white, male, bourgeois) political scholars and activists, Habermas also noted some future threats to the continued success of a public sphere defined by conversation.
Freedom of speech and the press are essential, but the Habermasian public sphere also depends on a modicum of “civil, rational discourse” among citizens. It also presumes that education and journalism are sources of reliable information upon which to build opinions and promote arguments. Habermas feared that the field of public relations would enable the wealthy and powerful to manipulate public opinion, and that a diminishing variety of news sources would warp journalistic integrity.
It doesn’t take much work to demonstrate that PR and media monopolies have indeed damaged the public sphere as a source of the authentic public opinion that is supposed to emerge from informed discourse among citizens. What neither Habermas nor any of his contemporaries could have anticipated around 1960 was the power of computationally targeted disinformation and the technological amplification of antisocial actors to warp the public sphere to the degree we see in the era of Cambridge Analytica and Gamergate. I include myself among those who, almost thirty years later, saw the dangers of enclosure and manipulation but failed to grasp the power of Internet-amplified surveillance capitalism, big data, attention engineering, and disinformation factories.
Solution 1: net literacy
Since I started writing about digital media and networks in 1987, I have been asked by critics, by scholars, and by myself: “Do the benefits of these newly granted powers of intellect augmentation and many-to-many communication ultimately outweigh the negative impacts on our attention, discourse, and polity?” Around a decade ago, I decided that the answer was “it depends on what people know (social media literacy) and how many people have this knowledge.” Personal computers and Internet accounts don’t come with behavioral user manuals or epistemological danger warnings. The knowledge and skills necessary to benefit personally from the digital commons, and to contribute to rather than damage the public sphere, aren’t secret or hard to learn.
But in 2010, when I started writing about these literacies, few educational institutions were addressing issues of search and credibility, discourse and civility, behavior and citizenship. For the most part, they still aren’t. So I wrote a book that I hoped would inform the online behavior of today’s netizens: a guidebook parents could give to a high school graduate, and one they could read themselves.
When I set out to write Net Smart: How to Thrive Online, I decided that five essential literacies, each a body of skills and lore, were necessary to thrive online and, by way of individual thriving, to enhance the value of the commons. They are the literacies of attention, crap detection, participation, collaboration, and network awareness:
- Attention because it is the foundation of thought and communication, and even a decade ago it was clear that computer and smartphone screens were capturing more and more of our attention.
- Crap detection because we live in an age when it is possible to ask any question, anytime, anywhere, and get a million answers in a couple of seconds, but it is now up to the consumer to determine whether the information received is authentic or phony.
- Participation because the birth and the health of the Web did not arise from, and should not depend upon, the decisions of five digital monopolies. It was built by millions of people who put their cultural creations and inventions online, nurtured their own communities, invented search engines in their dorm rooms, and built the Web itself in a physics lab.
- Collaboration because of the immense power of social production, virtual communities, collective intelligence, and smart mobs, afforded by access to tools and knowledge of how to use them.
- Network awareness because we live in an age of social, political, and technological networks that affect our lives, whether we understand them or not.
In an ideal world, the social and political malignancies of today’s online culture could be radically reduced, although not eliminated, if a large enough portion of the online population were fluent, or at least basically conversant, in these literacies. In particular, while it seems impossible to stem the rising tide of crap at its sources, the impact of disinformation could be significantly reduced if most of the online population were educated in crap detection.
Solution 2: regulate and reform
In Net Smart, I pointed out that while they are often not sufficient, the most basic tools of crap detection are close at hand and easy to use. It isn’t difficult to search the name of an author to explore what others say about that author’s authority, and compendia of tools for verifying medical, political, and journalistic claims are a click away. Ten years ago, I promoted universal crap detection education as a prophylactic against manipulation and pollution of the public sphere; now I’m not so sure even widespread education will be sufficient in the face of super-amplified misinformation.
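To make “search the author” concrete, here is a minimal sketch of one mechanical version of that first step, using Wikipedia’s public search API. It is an illustration rather than a prescription: any search engine or fact-checking compendium would serve the same purpose, and the User-Agent string is a placeholder.

```python
# A sketch of one mechanical form of "search the author": query
# Wikipedia's public search API for what has been written about a name.
import json
import urllib.parse
import urllib.request

def search_author(name: str) -> list[str]:
    """Return titles of Wikipedia articles that match the given name."""
    url = ("https://en.wikipedia.org/w/api.php?action=query&list=search"
           "&format=json&srsearch=" + urllib.parse.quote(name))
    request = urllib.request.Request(
        url, headers={"User-Agent": "crap-detection-demo/0.1"})  # placeholder UA
    with urllib.request.urlopen(request) as response:
        data = json.load(response)
    return [hit["title"] for hit in data["query"]["search"]]

# A first step in vetting a byline: what do independent sources say?
print(search_author("Jürgen Habermas"))
```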
That’s where the arms race comes in.
The biggest obstacle to de-crapification is the power of Facebook, which is highly resistant to reform. As Siva Vaidhyanathan puts it, “the problem with Facebook is Facebook.” That is, Facebook profits by amassing detailed behavioral dossiers on billions of people, then selling advertisers microtargeted access to the attention of those people (“female,” “suburban,” “millennial,” “high school education,” and myriad more detailed behavioral characteristics). This is phenomenally more powerful than the PR apparatus Habermas feared: old-school billboards and television advertising cannot come close to reaching exactly the population an advertiser wants to engage. What Cambridge Analytica revealed is that political opinions, like toothpaste, can be sold more effectively this way. The impact of computationally targeted propaganda is multiplied by the coordinated activities of human trolls, who are in turn further amplified by armies of bots. Facebook can’t turn off the part that can covertly manipulate the public sphere without turning off its own main revenue stream.
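To see how little machinery the mechanism requires, here is a deliberately toy sketch, not any real platform’s API, of how behavioral dossiers plus a few attribute filters yield a microtargeted audience. Every name, field, and record below is hypothetical.

```python
# Toy sketch of microtargeting: behavioral dossiers plus attribute
# filters select exactly the audience an advertiser pays to reach.
# All names, fields, and data are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Dossier:
    user_id: int
    attributes: set = field(default_factory=set)

def target_audience(dossiers, required):
    """Select every user whose dossier matches all requested attributes."""
    required = set(required)
    return [d.user_id for d in dossiers if required <= d.attributes]

dossiers = [
    Dossier(1, {"female", "suburban", "millennial", "high school education"}),
    Dossier(2, {"male", "urban", "retired"}),
    Dossier(3, {"female", "suburban", "millennial", "gun owner"}),
]

# An advertiser, commercial or political, buys access to exactly this slice:
print(target_audience(dossiers, {"female", "suburban", "millennial"}))  # [1, 3]
```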
Just as the problem with Facebook is inseparable from its business model, many of the destructive influences of the Internet on the public sphere grow from the same new powers that benefit millions by creating communities across space. If you believe you are the only gay teen in a small town, or if you are the caregiver for a homebound Alzheimer’s patient, you can connect with others who share your challenges. If you have a rare disease that only one in a million people have, there are two thousand others in every two billion people online, and you can connect with them.
But again, that power does not discriminate between uses we applaud and uses we abhor. The same capacity to connect strangers who share fringe interests is also useful to Nazis, anti-vaxxers, and flat-earthers. Computational propaganda built on surveillance capitalism and capable of manipulating the political opinions of billions, however, derives from the huge data-mining and analysis capabilities of giants such as Facebook and Google.
Google’s YouTube is another example of unintended consequences that damage the public sphere. The algorithm that suggests videos to YouTube viewers can radicalize young people: users who start out with an interest in gaming, for example, are eventually steered toward videos by extremists. Again, this result was not planned, but it is inextricably linked to YouTube’s revenues. Engagement, meaning continued attention, feeds YouTube’s ability to target ads. And Holocaust-denial videos, medical misinformation, and recruiting videos made by terrorists turn out to be engaging content for some people.
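A toy illustration, emphatically not YouTube’s actual system, makes the structural point: a recommender that ranks candidates purely by predicted watch time never needs to inspect what a video says for the most “engaging” material to rise to the top. All titles and scores below are invented.

```python
# Toy engagement-maximizing recommender: rank candidate videos purely
# by predicted watch time, blind to their content. Data is invented.

candidates = {
    "gaming walkthrough": 4.0,            # predicted minutes of watch time
    "edgy gaming commentary": 6.5,
    "conspiracy 'explainer'": 9.0,
    "extremist recruitment pitch": 11.0,
}

def recommend(videos, k=2):
    """Return the k videos predicted to hold attention longest."""
    return sorted(videos, key=videos.get, reverse=True)[:k]

print(recommend(candidates))
# ['extremist recruitment pitch', "conspiracy 'explainer'"]
```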
But the central profit-making feature of social media, capturing attention, is also a point of entry for reform. One threat to mindful engagement in the public sphere that can be addressed through both design and education is the hijacking of attention for profit. The reason social media behemoths amass detailed dossiers on the behavior and habits of billions of people is that knowing each person’s interests makes it possible to tailor advertising to their attention. And the longer an app can hold a user’s attention, the more opportunities it has to present advertising. Attention capture by smartphones is easily observed on any city street, where a majority of pedestrians are often looking at their phones rather than where they are going. More frightening are the texting drivers.
Smartphones, and their apps, are built to promote this behavior, and we could change how they are built. Sustained attention is the foundation of “engagement,” the attention-retention metric that drives the algorithms that escalate suggested videos from games to Jihadi or Nazi recruitment propaganda. Both the capture and the retention of attention are becoming more effective as engineers deliberately build into their systems the same understanding of human attention that slot machine designers exploit: distracting signals that trigger FOMO (the little colored dot that indicates new email, new posts, new likes), intermittent reinforcement (cultivating dopamine dependency), and other user interface designs that capture and hold attention for the purposes of advertising. This is one challenge where ethical design, while perhaps an idealistic goal, is a possible means of ameliorating the attention-draining tech at its source; Tristan Harris has written eloquently about this possibility. Again, I see little incentive for a profitable enterprise to dial back part of its profit-making machinery without political pressure (YouTube brought in $15 billion in 2019, for example): the public sphere must amass enough public opinion to influence regulation and legislation.
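To make the slot machine analogy concrete, here is a toy simulation, not any app’s actual code, of a variable-ratio reward schedule. The reward probability and “patience” parameters are invented; the point is that unpredictable, intermittent payoffs keep the checking behavior going far longer than a predictable schedule would.

```python
# Toy simulation of intermittent reinforcement: each check of an app
# pays off (a like, a new post) with fixed probability; a payoff
# restores the urge to keep checking, a dry spell drains it.
# All parameters are invented for illustration.
import random

def checks_per_session(reward_probability=0.25, patience=5):
    """Count how many times a user checks an app before giving up."""
    checks, urge = 0, patience
    while urge > 0:
        checks += 1
        if random.random() < reward_probability:
            urge = patience   # intermittent payoff: the little dot lit up
        else:
            urge -= 1
    return checks

random.seed(42)
trials = [checks_per_session() for _ in range(10_000)]
print(sum(trials) / len(trials))  # mean checks per session
```

Run it and the mean comes out near thirteen checks per session, far more than the five a user would make if no check ever paid off.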
I confronted issues of attention in the classroom during my decade of teaching at UC Berkeley and Stanford, as does any instructor who faces an audience of students looking at their laptops and phones in class. Because I was teaching social media issues and social media literacies, simply banning screens in class seemed to me to be evading the issue, so we made our own attention one of our regular objects of inquiry. I asked my co-teaching teams (groups of three learners who took responsibility for driving conversation during one-third of our class time) to devise “attention probes” that tested our beliefs and behavior.
Testing attention is a well-established practice, of course. When I researched attentional discipline for Net Smart, I found an abundance of evidence, from millennia-old contemplative traditions to contemporary neuroscience, for the plasticity of attention. Simply paying attention to one’s attention, the method at the root of mindfulness meditation, can be an important first step in diminishing distraction. Yet attention education does not appear to hold the advantage in this arms race: the attention engineers have been wildly successful, and they are backed by surveillance capitalists and computational propagandists deploying big data, bots, and troll armies.
This lopsided arms race is what leads me to conclude that education in crap detection, attention control, media literacy, and critical thinking is important, but not sufficient to rebuild the public sphere. Regulation of the companies that wield these new and potentially destructive powers will also be necessary. I don’t pretend to know enough to recommend how to craft such legislation, but if I could choose a panel of experts to make recommendations, I definitely would trust the recommendations of a panel that includes danah boyd, Zeynep Tufekci, Cory Doctorow, Ethan Zuckerman, Shoshana Zuboff, Brewster Kahle, Tim Berners-Lee, Renée DiResta, and Tim Wu. The list over-represents Americans (Tufekci was born in Turkey, Doctorow in Canada, Berners-Lee in Great Britain); it needs people from other countries, more women, and more people of color. Putting together the world’s best committee is absolutely no guarantee that political actors will turn its recommendations into effective policy. In the face of the dangers facing the public sphere, however, I think we should not shy away from dreaming of big solutions.
There are other reasons to curb the monopoly power of digital mega-corporations, but if the damage to the public sphere they inadvertently enable is to be mitigated, legislative regulation is a necessary start. Teaching middle school students how to search, requiring high school courses in critical thinking online, and educating parents as well as students are also necessary. But without regulation, educators are bringing textbooks to a massive data fight.
Howard Rheingold is a pioneering internet intellectual, the author of numerous books about digital technology, and an early and influential member of the WELL, one of the first online communities.