Mark Zuckerberg announced his plan to make Facebook more private in April 2019. Photo credit: Anthony Quintano/Wikimedia Commons


In 2018, as we were all still digesting how disinformation campaigns upended the 2016 election, technologist and internet critic Jaron Lanier published a manifesto. Ten Arguments for Deleting Your Social Media Accounts Right Now (Henry Holt) introduced readers to the methods by which platforms like Facebook and Twitter not only capture and hold your attention but exploit the quality of your attention for profit. “Are you sad, lonely, scared?” Lanier asked. “Happy, confident? Getting your period? Experiencing a peak of class anxiety?” Algorithms pick up this data and exploit it on behalf of, as Lanier puts it, “so-called advertisers.”

Lanier believes that marketing on social media is fundamentally different from non-digital marketing because earlier forms of advertising, while often manipulative, did not rely on constant customer surveillance or on technology engineered to maximize the customer’s attention. And that technology, Lanier argued persuasively, was geared towards keeping users on the platform by promoting the darkest human emotions.

While some of what we know about social media manipulation involves adults, it’s easy to see why the most effective criticism focuses on harm to children and teens. The younger people are, the less likely they are to be self-regulators and critical thinkers, and the more likely they are to care what others think. In addition, teenagers are by nature emotionally labile, and they have historically been the group seen as most vulnerable to media storytelling.

What’s complicated about the history I just sketched is that, for much of the twentieth century, it was not at all clear that media affected how the young, or anyone else, behaved. Consider the efforts to suppress movies about crime in the 1930s, the panic over comic books fueled by psychiatrist Fredric Wertham in the 1950s, Cold War anxiety about the effects of rock ’n’ roll on teens, and dubious reports that television violence was fueling actual violence in the 1970s. And don’t forget Tipper Gore’s campaign for federal regulation of explicit lyrics and violent video games at the end of the twentieth century.

And is it any surprise that the image-heavy platforms Instagram and TikTok fuel “body dissatisfaction and disordered eating,” or that social media shapes “our concept of beauty”? Back in 2016, yet another scientific study linked the unrealistic Barbie doll figure to compulsive female dieting. In 2013, similar research linked eating disorders in boys to proliferating media images of “ripped” men with six-pack abs.

But separating tendencies from actual media effects, and measuring the harm, isn’t easy. For example, if some boys develop eating disorders in pursuit of a six-pack, and others don’t, can we blame the medium that spread that message?

Yes, but that blame needs to be explicitly articulated if the medium is to be held accountable. And what is new about the attacks on Facebook is not the focus on damaging content but the identification of the technology itself as deliberately engineered to cause and intensify negative emotion. But, of course, content matters. Critics rightly deplore the misinformation campaigns that warp our democracy, the bullying that intensifies in a virtual environment, and the performativity that allows individuals to broadcast brutal crimes.

But increasingly, Facebook itself—not its creators and users—is being held responsible. Why?

The simple answer is the increasing evidence that Facebook knows about the damage it causes and exploits that damage to extend “sessions” (uninterrupted time on the platform). Mark Zuckerberg’s constant disavowals and apparent lies about this strategy have fueled the fire, leading to a movement to log out of Facebook on November 10 in protest. Furthermore, Facebook and its associated platforms deliver powerful broadcasting and conversational technology into the hands of users without being able to properly moderate what users, or malicious organizations, post.

But, Micah Sifry reminds us at The Connector (October 26, 2021), we also have increasing evidence that the problem with Facebook may not be Facebook at all: it’s Mark Zuckerberg. After all, platforms do not engineer themselves; they are engineered to reflect company priorities and policies. Employees have been warning “the boy king” for years about the real-world damage the platform promotes, including ethnic violence, white supremacist organizing, and bullying; in his single-minded pursuit of a global attention monopoly, Zuckerberg has ignored and suppressed that knowledge.

As Sifry writes,

“We are now collectively all close to a par with Facebook employees, in terms of having access to a few megatons of internal research on how the giant platform warps the world and not having any power to get change done. (Here’s a growing Google Doc of nearly 100 stories that have been published on the Facebook documents across more than a dozen outlets, helpfully compiled by former Facebook VP Katie Harbath.) One person, Zuckerberg, has a majority of governing stock there, a unique concentration of power. The company’s real customers, advertisers, aren’t leaving it because its targeting tools are both cheap and extremely powerful. And right now I see no signs that last year’s successful Stop Hate for Profit advertising boycott is about to go back to the barricades. (The boycott is over, so it’s ok to hate for profit again?)”

The Facebook logout is a four-day effort “to hit Zuckerberg in the only part of his body that he apparently cares about, his wallet.” Indeed, we know that the company lost millions of dollars the day Facebook went dark for six hours. The stock tumbled, and Zuck himself lost billions in net worth. You can take the pledge to log out for one day here.

But wait! Before we all agree on blowing up the social media giant, let’s remember that it also does good, so why not regulate it as other media corporations are regulated? Why is this so unthinkable or undoable? Killing Facebook might even be a terrible idea. Like the telephone, social media has altered human social relations completely—there is no putting that genie back in the bottle. It has made it possible to expand and deepen social relationships and, at the same time, reduce the effort necessary to maintain even the closest ties.

For example, when someone from one of my past lives dies, no one ever calls anymore. I always find out on Facebook, making it hard to imagine not visiting the stupid thing at least once a day, even though I rarely post anything but this newsletter. I try to stay out of conversations that can turn toxic in seconds. Furthermore, I know dozens of home-bound people for whom social media apps are a lifeline to old friends, family, helpful political discussions, and news. Yes, this is also precisely why the platform is an effective misinformation super-spreader.

But what was once engineered to emphasize profit-making can be re-engineered to promote social good. So Facebook, please keep the good, get rid of the bad, and make an ongoing commitment to addressing problems as they arise. That’s what responsible capitalism does, and it’s what many Facebook employees have wanted for years.

A one-day logout will not achieve change at Facebook, although it may demonstrate how many users want that. Reforming social media is a political job. We need to pressure politicians to do it, not through censorship, but via regulation that acknowledges a public good that social media companies must respect in exchange for using public communications infrastructure to make billions of dollars a year.


Claire Bond Potter is Professor of Historical Studies at The New School for Social Research and co-Executive Editor of Public Seminar. Her most recent book is Political Junkies: From Talk Radio to Twitter, How Alternative Media Hooked Us on Politics and Broke Our Democracy (Basic Books, 2020). This essay is adapted from a post on her Substack, Political Junkie.