What is Facebook?
I ask this question as I am well embarked on a two-week intensive seminar, “Democratic Crisis and the Politics of Social Media,” part of the 27th annual Democracy and Diversity Institute sponsored by The New School’s Transregional Center for Democratic Studies in Wrocław, Poland. Facebook and Twitter, and the unregulated (often untruthful) political narratives they disseminate, will be a key focus, as will the ways that prior technologies such as radio, television, and newspapers prepared us to use social media. My students are from Russia, Ukraine, Georgia, Poland, and the United States, and we are taking a deep dive into critical questions of democracy and democratic movements under Web 2.0.
Together, my students and I seek to understand the recent history of democratic crisis by examining the rise of a global digital public sphere. In the past three decades, the politics of social media have been both aspirational and cynical. While increased communication within and across national borders, as well as the possibility of instant translation, can inspire global democratic organizing, digital communication has also fueled authoritarian and anti-democratic coalition building. The benefits of social media are not abstract: it fuels resistance movements; supports access to privileged information, local journalism, and fact checking; and powers networks that guide refugees and immigrants fleeing state violence. Yet the same apps and digital tools have also fueled the rise of nationalism, authoritarianism, surveillance and global terror. Using Benedict Anderson’s Imagined Communities (1983) as a provocation, we will chart the similarities and differences between social media and its non-digital predecessors, work to understand the present terrain in which citizens manage information, and imagine principles that might guide a democratic digital public sphere.
But I am now pressed by a second, more parochial concern, one that I have thought about a great deal, but that has become newly urgent in the past two weeks. President Donald Trump has nominated a new candidate for the Supreme Court, Brett Kavanaugh, to replace retiring Associate Justice Anthony M. Kennedy, and a battle is brewing. To be honest, I am ambivalent about Senate Democrats’ intention to derail the nomination, just as Merrick Garland’s nomination to the Court was derailed at the end of the Obama administration. I don’t like that both parties seek to rule the country from the judiciary, or that my fellow Democrats do not understand—as Donald Trump invokes broad powers to remake all the federal courts for decades to come—that such politics make our democracy volatile and unstable.
Most of my friends are also primed for a fight over abortion and gay marriage, rights that were won in the courts and could be lost there. They are concerned that the battle against the administration’s nativist and violent immigration policies, which have been blocked and parried in the courts, will be lost. I worry about these things too. But I also worry that movement politics are obscuring pressing concerns about democracy, specifically in the digital realm. Which is why I would like Brett Kavanaugh to be pressed for his views on what Facebook is, and what its relationship to governance should be, a question that has enormous ramifications for the future of our democracy.
Here’s an example of something I would like any future SCOTUS nominee, and the future clerks of that nominee, to be actively thinking about: according to a 2017 Pew report, almost 50 percent of voters and potential voters get most of their news online, and a significant percentage are most likely to read news referred to them by friends. Those of us who joined the social media revolution between 2006 and 2008 did not anticipate this evolution, but it is what has happened: the fact that algorithms written by powerful corporations have become the gatekeepers of information flow is now an urgent issue for what Nancy Fraser has called “actually existing democracy.”
Yet Facebook, and social media companies more generally, have carved out a new kind of space for themselves by marketing their platforms as an arena for free and open conversation while, at the same time, enforcing limits to free speech that accrue to privately-owned space. These boundaries are also shifting at a rate much faster than in “the real world,” given how long it takes to pass a piece of legislation. This may be new judicial territory, and a canny evasion that the traditional branches of government are, as yet, unprepared to address.
In April 2018, Facebook published updated Community Standards that do—and do not—help us imagine how the platform will evolve over the next several years as a private space that sells itself as a public square. Many of these standards prohibit things that are already illegal. For example, in the category of “safety,” Facebook now prohibits the promotion of harassment, sexual exploitation of a child or an adult, terrorism, speech that incites people to violence, and republishing copyrighted material without permission.
Facebook also explicitly prohibits giving advice about things that are illegal, such as how to join ISIL; as well as speech that might potentially be actionable, and that some ethicists would classify as wrong, such as how to cultivate an eating disorder. You can post graphically violent images if you are educating people about violence, but not for your own or someone else’s enjoyment (good luck figuring that one out). And surprisingly, there is a range of horrible images—cannibalism, partially burned human bodies, wounded or dead people with visible internal organs—that Facebook explicitly permits, with a warning that they should only be viewed by adults.
Most importantly, perhaps, in the interest of establishing standards after it apparently became an open-access repository for Russian disinformation and an arm of the Trump campaign, Facebook is deleting some posts that it deems too political. I say “some” because the same highly emotional, and questionably factual, utterances I have been seeing for years, and that have gotten ever worse since Donald Trump took office 18 months ago, still seem to be available in my own feed.
But my Facebook feed has been changing in significant ways because Facebook decides—and has always decided—what should be meaningful to me. There are more ads; more news about celebrities; and more videos of kittens and puppies that, Facebook informs me, my friends have “liked” and they think I might like too. (I don’t. I have live kittens and puppies, and don’t want my time wasted with virtual ones.) There is also more news from family and friends, regardless of whether those connections share my interests, preoccupations or concerns. In a sense, Facebook seems to want to turn back the clock, asking us all to step back from the precipice of political engagement that we have been on since the campaign of 2016.
Facebook’s desire to make my life more enjoyable and less stressful may also reduce the platform’s capacity to educate me. There are fewer news items and critical articles. A smaller circle of “friends” in my feed appears to mean that I communicate intensely with fewer than a dozen people now—or about half as many as the 25 that Facebook said I would be encountering regularly—because intense engagement with a few people seems to narrow the algorithm. Instead of links to what my friends are reading, I tend to get links to what they are eating—or where they are traveling. Moreover, Facebook has intensified our capacity to self-silo with no social consequences: without unfriending people, I can adjust my settings so that people who annoy me don’t see my posts, and I don’t see theirs.
Things also disappear, a development that is even more unsettling now that I am in close discussion with students from countries where state censorship, and the collaboration of Internet companies with state security, are real. Recently I have been seeing outraged posts from people who claim that their Facebook utterances have inexplicably disappeared. “Thanks for reporting me, whoever you are,” snarled one woman, who had posted a picture of a completed attic renovation. “Not sure what was offensive to you, but Facebook took my post down.” A renovated attic? Was it an enraged socialist in our network who believed she ought to have acknowledged her “privilege”? A design troll? Indeed, in an effort to clean up its act without hiring editors, Facebook has encouraged us to report each other, which, given the volatility of social media more generally, might have the unintended consequence of creating new channels of Stasi-style cyber-revenge. But it is more likely that a word or phrase, used innocuously in the case of a home renovation, triggered a Facebook filter to remove her post.
To return to an earlier point, however, what I have noticed is that Facebook seems to be redefining friendship itself as the exchange of private information, walling “friendship” off from the realm of ideas, politics and citizenship. It has reformulated its scrapbook-style news-aggregation format to exclude anything but corporate news outlets, and has reduced the distribution of news more generally. Smaller publications that publish critical opinion pieces, like Public Seminar, Dissent, The Nation, and Jacobin (to name a few I never see in my feed any more unless I put them there), may be falling into the net of Facebook’s lame effort to combat fake news.
Exploring Facebook’s new standards, issued in April 2018, following Mark Zuckerberg’s public spanking in Congress, suggests that this is indeed the case. “We want to help people stay informed without stifling productive public discourse,” Facebook says in its statement on false news in the Integrity and Authenticity section. “There is also a fine line between false news and satire or opinion. For these reasons, we don’t remove false news from Facebook but instead, significantly reduce its distribution by showing it lower in the News Feed.” How low? So far below the kittens and eBags ads that I might have to stay on Facebook for much of the morning to find them.
This may also be an effort by Facebook to extract fees for distribution that used to be free, fees that corporate news outlets can pay, but that low- or non-profit publications specializing in critical thought for a general audience cannot. The numerous ads I receive on Messenger, urging me to pay to “promote” our Public Seminar essays, imply, although they do not prove, that this is true. If Facebook were understood as a broadcasting channel, this kind of tinkering might be investigated, legislated, and even litigated, but that would require a shared political and legal understanding of what Facebook is, and that isn’t likely to happen in the Trump administration. In fact, outside media studies circles, there is virtually no advocacy for such an abstract task.
Why? I am sure there are many reasons, but one thing I believe is that few people in Washington, in either party or any branch of government, understand digital technology and the forms of communication that are now woven into our daily life. There is little assessment among the political classes of how the digital realm shapes our citizenship, except insofar as digital tools represent ways to deliver services on the cheap (health care consultation, and banking); create for-profit enterprises out of what used to be acknowledged as a public right (education); run political campaigns and collect small-dollar donations; and deliver journalism products that run the gamut from actual, fact-checked news and opinion to listicles and unsubstantiated rumor.
This is something that government should do, and it is a task particularly suited to the law and legal thinking. The courts, and particularly the Supreme Court, have often provided guidance about what kinds of legislation and regulation operate in the public good. The realm of media law is growing by leaps and bounds to incorporate the many challenges to conventional communications and intellectual property issues that digital technology and social media promote. But that has not been reflected in any real engagement between Congress and the technology companies, in part because those companies make massive political donations to ensure they won’t be required to have these conversations. Google and its affiliates have donated over $2.6 million in the current campaign cycle alone, and in 2016 they donated almost $9 million: three-quarters of that money went to Democrats. If you go to GovTrack.us and look at the 54 bills that were introduced into the House and Senate in 2017 that involve the Internet, most see digital technology as a route to something else—the delivery of services or education—or as an infrastructure that requires national defense, i.e., cybersecurity. Put “social media” in the search bar, and you get pretty much the same thing—the one bill in this category that passed and was signed by the president is a defense appropriations bill and bears no direct relationship to social media as it is used by citizens in daily life.
Who is surprised? The resistance to regulation that is baked into the technology industry is matched only by the enthusiasm for the free market that exists in varying degrees on both sides of the aisle. Yet what Congress has no interest in legislating is also what Congress does not understand; and social media moves at a speed with which neither Congress nor the judiciary may be keeping up. Imagine: Facebook has been around since 2004, open to the public since 2006, and a publicly held company since 2012. Yet Mark Zuckerberg was never once invited to chat with Congress about his product, even after a 2012 IPO that triggered a Securities and Exchange Commission investigation, until it became part of an investigation into possible campaign fraud in 2016.
The question among media scholars is: what is the entity being regulated, who, theoretically, might be the regulator, and what are the precedents for either regulation or freedom? Is a platform like Facebook a channel, like television, that has access to public airwaves and obligations to the public? Is it a newspaper, for which one might reasonably turn to the courts for relief, under the multiple precedent-setting decisions that have established the public’s right to accurate information about governance? Is it only a corporation that can be sued for harm? (The answer to the third question is, bizarrely, yes—and no.)
Because of these ambiguities, social media companies are happily left to regulate themselves, and consumers are left to adjust on the fly to an information environment that shifts whenever familiar tools change. This means that consumers are not only left to adapt to new experiences on which they have no critical perspective; they may also lack the capacity to locate the information that is theoretically available to them.
The cultural, political and intellectual impact of social media is highly fluid, something that users and entrepreneurs may be able to accommodate, but that disengaged legislators and judges may find almost impossible to grasp, because keeping up demands a tremendous burden of knowledge acquisition. Legal scholars who do understand and promote thinking about our digital public sphere—Cass Sunstein, Lawrence Lessig—saw their chances for a SCOTUS seat die in the waning hours of November 8, 2016. But this does not have to be an issue of the left. This is why Brett Kavanaugh ought to be able to demonstrate familiarity with the media world as it exists today—and why that media world has everything to do with actually existing democracy.
Claire Potter is professor of history at The New School, and Executive Editor of Public Seminar, currently teaching in the Democracy and Diversity Institute, Wrocław, Poland. You can follow her on Twitter.