The following is an excerpt from an essay first published in Social Research: An International Quarterly. It is part of the journal’s issue Photography and Film as Evidence.
I don’t know if I have ever fully trusted what I see. As Trevor Paglen points out in his 2016 essay Invisible Images, photographs and other kinds of technological images can’t be understood today as “representations,” but rather as “activations” and “operations.” His essay positions them within a network larger than just object, maker, and observer. Once you begin to account for what images do—what they deploy, what they inspire, what they make possible—you can begin to see how direct access to the archive becomes one of the most powerful tools for visualizing and dictating the future. Just imagine: an image of everything worth remembering. The ability to remove from circulation anything you’d like everyone to forget. Today, when the vast majority of the trillions of images produced every year live their entire virtual lives unseen by human eyes, what an image depicts and why matter less than the fact that the image is visible at all.
Paglen argues that the invention of machine-readable images threatens to curtail human autonomy to a previously unimaginable degree: his term “invisible images” refers to machine-readable images that “can only be seen by humans in special circumstances and for short periods of time.” Theoretically, these digital files, stored as pure code, never have to be processed into a visual that is legible to humans. More broadly, invisible images are ones whose human, visual functions are secondary to their mechanical, hidden functions. They may be visible at times or in certain forms, but a gap has opened between how we perceive images and how they are used. Photos posted on Facebook, for example, may seem harmlessly “analogous to the musty glue-bound photo albums of postwar America.” But whether a post is liked or shared is of secondary importance to the platform: “When you put an image on Facebook or other social media, you’re feeding an array of immensely powerful artificial intelligence systems information about how to identify people and how to recognize places and objects, habits and preferences, race, class, and gender identifications, economic statuses, and much more,” Paglen (2016) writes. “Regardless of whether a human subject actually sees any of the 2 billion photographs uploaded daily to Facebook-controlled platforms, the photographs on social media are scrutinized by neural networks with a degree of attention that would make even the most steadfast art historian blush.”
Appearing in the year of Palantir, Cambridge Analytica, and data-profiled elections, Paglen’s essay articulates a broader social consensus around a certain belief long held by cultural theorists—that there is something different, something especially dangerous, about digital images. The root of the word “image” means to copy or to imitate, and, as Paglen points out, in the 1990s “there was much to do about the fact that digital images lack an ‘original.’” Without a source, they are no longer beholden to analog notions like reality. In Towards a Philosophy of Photography, the media theorist Vilém Flusser argues that one radical avenue for photographers is “to release themselves from the camera, and to place within the image something that is not in its program” by exposing the mechanisms of the black box and attempting “to create unpredictable information” (1983, 81).
Made in 1957, the first digital image was a scan of an analog photograph, but even by the 1880s, the physicist Shelford Bidwell had begun to develop “telephotography,” a process for converting photographs into transferable data. Digital images are an extension of photography, which Flusser identifies as a form of “technical image.” Photography itself emerged at the end of a long history of visual consumption wherein visuality came to be understood as a fluid mental construction, subject to any number of internal and external factors, as opposed to a rigid, tactile, and purely physical system. This history is well documented, especially by the scholar Jonathan Crary, whose book Techniques of the Observer (1992) maps the evolution of the relationship between viewer and image, subject and object, and the various “modes of work” that humans have developed to process different kinds of images, from paintings to optical toys to photographs. Flusser argues that “apparatuses” such as cameras take over the mental processes that would otherwise occur when you read an image. A photograph’s space is once again codified, not as an external visual system but as an objective image—“objective” because a photograph isn’t usually separated from the visual mechanics of the camera. (There are obviously exceptions, and those are the photographs Flusser is most interested in.) This is the milieu in which digital images emerged.
Computers take this displacement one step further, because their optics aren’t stable. An algorithm is even more of a black box than a camera, because it can generate images according to an evolving rule set. This is why technologies like virtual and augmented reality are so jarring until you learn how they work: we are constantly being introduced to entirely new ways of seeing that are only partially, and sometimes not at all, based on human sight. By the time you get to convolutional neural networks and other AI image systems, the parameters are often barely understood, if they are understood at all, by the people who use them. Flusser’s avenue is fast closing: data mining is only strengthened by a diversity of signals, as “adversarial images simply get incorporated into training sets used to teach algorithms how to overcome them,” Paglen (2016) writes, referring to generative adversarial networks, a class of machine learning in which one model learns by trying to fool another, reducing the need for manual supervision of AI training.
Here, Paglen’s premise appears inevitable: decoupled from the body, images don’t need to be seen at all. Invisible images are intrinsically meaningless, evidence of nothing. They include unposted selfies and forgotten B-roll, livestreams with no viewers, cloud or closed-circuit security footage. They are images that algorithms make to communicate directly with other algorithms. (That is one way they talk.) They are artworks stored in underground climate-controlled facilities. Most images exist invisibly at some point or another, and the delegation of access to them becomes a moral question: Who is allowed to see the models of our society, and to what end? Who gets to see what the machine sees or to tell it where to look? Who gets access to art in its original form and who is responsible for sorting through and moderating all the content that we would rather not see?
Absent human perspective, invisible images have no politics or ethics, but politics and ethics are not automatically dissolved. Invisible images are not necessarily unprocessed images. Seen or unseen, the primary function of images today is as part of a vast dataset feeding ever-more complex and socially integrated managerial systems. Invisible images inform everything from policing to electioneering, advertising to city planning, and they don’t operate through the familiar stratagems of propaganda and mass media, with their archaic dependence on sight. They are instead subject to various modes of processing into which the viewer, as it were, has absolutely no insight. An invisible image may go unseen and unliked, but it is always collected—it is how advertising companies know you are pregnant before you do; how Google knows the daily-shifting boundaries of occupied Palestine and Ukraine and Nagorno-Karabakh; how Amazon knows who you will vote for, the dimensions of your bedroom, and exactly which toothbrush you will like. “The invisible world of images isn’t simply an alternative taxonomy of visuality,” writes Paglen (2016). “It is an active, cunning exercise of power, one ideally suited to molecular police and market operations—one designed to insert its tendrils into ever-smaller scales of everyday life.” What is there to be done in the face of such a monumental and inscrutable system? His essay advocates for artists to create “a safe house in the invisible digital sphere,” by which he means an area beyond the reach of data mining and other “predations of a machinic landscape.”
Invisible images have only become more ingrained since Paglen coined the term, and their dangers more acute. Shoshana Zuboff’s brilliant book The Age of Surveillance Capitalism (2019) explores the implications of data mining (which is contingent on the generation and processing of invisible images) and the unprecedented ways that it dictates everyday activity, enforcing everything from shopping habits to totalitarian rule. “Surveillance capitalism operates through unprecedented asymmetries in knowledge and the power that accrues to knowledge,” she writes in the introduction. “Surveillance capitalists know everything about us, whereas their operations are designed to be unknowable to us. They accumulate vast domains of new knowledge from us, but not for us. They predict our futures for the sake of others’ gain, not ours” (11; emphasis in original).
How central is the act of looking in such an environment? Less so than it has ever been. Cultural criticism’s historic emphasis on the subject-object relationship leaves us alarmingly few precedents for discussing hidden visual systems. The few writers who have broached the topic understand it as something epochal: we are living in “the universe of technical images” (Flusser), “the new frontier of power” (Zuboff), and “the age of planetary civil war” (Steyerl). But in day-to-day discourse, journalists, art critics, and academics seem increasingly adrift in this brave new world.
The era of every-man-an-artist, which dawned decades ago with the invention of the 35-millimeter camera, gave way to the era of the curator. Today everyone understands, on some level, that identity, writ large, is constructed through the collective visual realm. The time of the curator was not without its doomsayers, and it turns out that monoculture, perfectly curated and moderated, constitutes the perfect dataset. But that is now the least of our worries. That data is now applied by models that determine our future, or so the engineers would have us believe, and the future is rendered as an endless remix of this stand-in for the Ideal. The question now is: How do we believe in a world whose future is foreclosed, where every outlier, every new alternative—no matter how exciting or banal—is treated as a glitch, an anomaly, and promptly edited out? “If we want to understand the invisible world of machine-machine visual culture,” writes Paglen (2016), “we need to unlearn how to see like humans.”
Will Fenstermaker is a writer based in New York. His work has been published by the Metropolitan Museum of Art, Frieze, Artforum, The Nation, T Magazine, and more.