Lev Manovich wrote the standard text on ‘new media’, back when that was still a thing. It was called The Language of New Media (MIT Press 2001). Already in that book, Manovich proposed a more enduring way of framing media studies for the post-broadcast age. In his most recent book, Software Takes Command (Bloomsbury 2013), we get this more robust paradigm without apologies. Like its predecessor it will become a standard text.
I’m sure I’m not the only one annoyed by the seemingly constant interruptions to my work caused by my computer trying to update some part of the software that runs on it. As Cory Doctorow shows so clearly, the tech industry will not actually leave us to our own devices. I have it set to ask my permission before updating, at least for those updates I can control. This might be a common example that points to one of Manovich’s key points about media today: the software is always changing.
Everything is always in beta, and the beta-testing will in a lot of cases be done by us, as a free service, for our vendors. Manovich: “Welcome to the world of permanent change – the world that is now defined not by heavy industrial machines that change infrequently, but by software that is always in flux.” (2)
Manovich takes his title from the modernist classic, Mechanization Takes Command, published by Sigfried Giedion in 1948. (On which see Nicola Twiley, here.) Like Giedion, Manovich is interested in the often anonymous labor of those who make the tools that make the world. It is on Giedion’s giant shoulders that many students of ‘actor networks’, ‘media archaeology’ and ‘cultural techniques’ knowingly or unknowingly stand. Without Giedion’s earlier work, Walter Benjamin would be known only for some obscure literary theories.
Where Giedion is interested in the inhuman tool that interfaces with nonhuman natures, Manovich is interested in the software that controls the tool. “Software has become our interface to the world, to others, to our memory and our imagination – a universal language through which the world speaks, and a universal engine on which the world runs.” (2) If you are reading this, you are reading something mediated by, among other things, dozens of layers of software, including Word v. 14.5.1 and Mac OS v. 10.6.8, both of which had to run for me to write it in the first place.
Manovich’s book is limited to media software, the stuff that both amateurs and professionals use to make texts, pictures, videos, and things like websites that combine texts, pictures and videos. This is useful, in that this is the software most of us know, but it points to a much larger field of inquiry that is only just getting going: studies of the software that runs the world without most of us knowing about it; platform studies, which looks at the more complicated question of how software meets hardware; and even infrastructure studies, which looks at the forces of production as a totality. Some of these are research questions that Manovich’s methods tend to illuminate and some not, as we shall see.
Is software a mono-medium or a meta-medium? Does ‘media’ even still exist? These are the sorts of questions that can become diversions if pursued too far. Long story short: While they disagree on a lot, I think what Alex Galloway and Manovich have in common is an inclination to suspect that there’s a bit of a qualitative break introduced into media by computation.
Giedion published Mechanization Takes Command in the age of General Electric and General Motors. As Manovich notes, today the most recognizable brands include Apple, Microsoft and Google. The last of these, far from being ‘immaterial’, probably runs a million servers. (24) Billions of people use software; billions more are used by software. And yet it remains an invisible category in much of the humanities and social sciences.
Unlike Wendy Chun and Friedrich Kittler, Manovich does not want to complicate the question of software’s relation to hardware. In Bogdanovite terms, it’s a question of training. As a programmer, Manovich (like Galloway) sees things from the programming point of view. Chun on the other hand was trained as a systems engineer. And Kittler programmed in assembler language, which is the code that directly controls a particular kind of hardware. Chun and Kittler are suspicious of the invisibility that higher level software creates vis-à-vis hardware, and rightly so. But for Galloway and Manovich this ‘relative autonomy’ of software of the kind most people know is itself an important feature of its effects.
The main business of Software Takes Command is to elaborate a set of formal categories through which to understand cultural software. This would be that subset of software that is used to access, create and modify cultural artifacts for online display and communication, for making interactives, or for adding metadata to existing cultural artifacts, as perceived from the point of view of its users.
The user point of view somewhat complicates one of the basic diagrams of media theory. From Claude Shannon to Stuart Hall, it is generally assumed that there is a sender and a receiver, and that the former is trying to communicate a message to the latter, through a channel, impeded by noise, and deploying a code. Hall breaks with Shannon with the startling idea that the code used by the receiver could be different to that of the sender. But he still assumes there’s a complete, definitive message leaving the sender on its way to a receiver.
Tiziana Terranova takes a step back from this modular approach, which presumes the agency of the sender (and in Hall’s case, of the receiver). She is interested in the turbulent world created by multiples of such modular units of mediation. Manovich heads in a different direction. He is interested in the mediation that software introduces in the first instance between the user and the machine itself.
Manovich: “The history of media authoring and editing software remains pretty much unknown.” (39) There is no museum for cultural software. “We lack not only a conceptual history of media editing software but also systematic investigations of the roles of software in media production.” (41) There are whole books on the palette knife or the 16mm film camera, but on today’s tools – not so much. “[T]he revolution in the means of production, distribution, and access of media has not been accompanied by a similar revolution in the syntax and semantics of media.” (56)
We actually do know quite a bit about the pioneers of software as a meta-medium, and Manovich draws on that history. The names of Ivan Sutherland, JCR Licklider, Douglas Engelbart, the maverick Ted Nelson and Alan Kay have not been lost to history. But they knew they were creating new things. The computer industry got in the habit of passing off new things as if they were old, familiar things, in order to ease users gently into unfamiliar experiences. But in the process we lost sight of the production of new things under the cover of the old in the cultural domain.
Of course there’s a whole rhetoric about disruption and innovation, but successful software gets adopted by users by not breaking too hard with those users’ cultural habits. Thus we think there’s novelty where often there isn’t: start-up business plans are often just copies of previous successful ones. But we miss it when there’s real change: at the level of what users actually do, where the old is a friendly wrapper for the new.
Where Wendy Chun brings her interpretive gifts to bear on Engelbart, Manovich is more interested in Alan Kay and his collaborators, particularly Adele Goldberg, at Xerox Palo Alto Research Center, or Parc for short. Founded in 1970, Parc was the place that created the graphic user interface, the bitmapped display, Ethernet networking, the laser printer, the mouse, and windows. Parc developed the models for today’s word processing, drawing, painting and music software. It also gave the world the programming language Smalltalk, a landmark in the creation of object oriented programming. All of which are component bits of a research agenda that Alan Kay called vernacular computing.
Famously, it was not Xerox but Apple that brought all of this together in a consumer product, the 1984 Apple Macintosh computer. By 1991 Apple had also incorporated video software based on the QuickTime standards, which we can take as the start of an era in which a consumer desktop computer could be a universal media editor. At least in principle: those who remember early-90s computer-based media production will recall also the frustration. I was attempting to create curriculum for digital publishing in the late eighties – I came into computing sideways from print production. Print was pretty stable by the start of the 90s, but video would take a bit longer to be viable on consumer-grade hardware.
The genius of Alan Kay was to realize that the barriers to entry of the computer into culture were not just computational but also cultural. Computers had to do things that people wanted to do, and in ways that they were used to doing them, initially at least. Hence the strategy of what Bolter and Grusin called remediation, wherein old media become the content of new media form.
If I look at the first screen of my iPhone, I see icons. The clock icon is an analog clock. The iTunes icon is a musical note. The mail icon is the back of an envelope. The video icon is a mechanical clapboard. The Passbook icon is old-fashioned manila files. The Facetime icon is an old-fashioned looking video camera. The Newsstand icon looks like a magazine rack. Best of all, the phone icon is the handset of an old-fashioned landline. And so on. None of these things pictured even exist in my world any more, as I have this machine that does all those things. The icons are cultural referents from a once-familiar world that have become signs within a rather different world which I can pretend to understand because I am familiar with those icons.
All of this is both a fulfillment and a betrayal of the work of Alan Kay and his collaborators at Parc. They wanted to turn computers into “personal dynamic media.” (61) Their prototype was even called a Dynabook. They wanted a new kind of media, with unprecedented abilities. They wanted a computer that could store all of the user’s information, which could simulate all kinds of media, and could do so in a two-way, real-time interaction. They wanted something that had never existed before.
Unlike the computer Engelbart showed in his famous Demo of 1968, Kay and co. did not want a computer that was just a form of cognitive augmentation. They wanted a medium of expression. Engelbart’s demo shows, many years ahead of its time, what the office would look like – but not what the workspace of any kind of creative activity would look like. Parc wanted computers for the creation of new information in all media, including hitherto unknown ones. They wanted computers for what I call the hacker class – those who create new information in all media, not just code. In a way the computers that resulted make such a class possible and at the same time set limits on its cognitive freedom.
The Parc approach to the computer is to think of it as a meta-medium. Manovich: “All in all, it is as though different media are actively trying to reach towards each other, exchanging properties and letting each other borrow their unique features.” (65) To some extent this requires making the computer itself invisible to the user. This is a problem for a certain kind of modernist aesthetic – and Chun may participate in this – for whom the honest display of materials and methods is a whole ethics or even politics of communication. But modernism was only ever a partial attempt to reveal its own means of production, no matter what Benjamin and Brecht may have said on the matter. Perhaps all social labor, even of the creative kind, requires separation between tasks and stages.
The interactive aspect of modern computing is of course well known. Manovich draws attention to another feature, and one which differentiates software more clearly from other kinds of media: view control. This one goes back to Engelbart’s Demo rather than Parc. At the moment I have this document in Page View, but I could change that to quite a few different ways of looking at the same information. If I was looking at a photo in my photo editing software, I could also choose to look at it as a series of graphs, and maybe manipulate the graphs rather than the representational picture, and so on.
This might be a better clue to the novelty of software than, say, hypertext. The linky, non-linear idea of a text has a lot of precursors, not least every book in the world with an index, and every scholar taught to read books index-first. Of course there are lovely modernist-lit versions of this, from Cortázar’s Hopscotch to Roland Barthes and Walter Benjamin to this nice little software-based realization of a story by Georges Perec.
Then there’s nonlinear modernist cinema, such as Hiroshima Mon Amour, David Blair’s Wax or the fantastic cd-roms of The Residents and Laurie Anderson made by Bob Stein’s Voyager Interactive. But Manovich follows Espen Aarseth in arguing that hypertext is not modernist. It’s much more general, and supports all sorts of poetics. Stein’s company also made very successful cd-roms based on classical music. Thus while Ted Nelson got his complex, linky hypertext aesthetic from William Burroughs, what was really going on, particularly at Parc, was not tail-end modernism but the beginnings of a whole new avant-garde.
Maybe we could think of it as a sort of meta- or hyper-avant-garde, one that wanted not just a new way of communicating in a medium but new kinds of media themselves. Kay and Nelson in particular wanted to give the possibility of creating new information structures to the user. For example, consider Richard Shoup’s SuperPaint, coded at Parc in 1973. Part of what it does is simulate real-world painting techniques. But it also added techniques that go beyond simulation, including copying, layering, scaling and grabbing frames from video. Its ‘paintbrush’ tool could behave in ways a paintbrush could not.
For Manovich, one thing that makes ‘new’ media new is that new properties can always be added to it. The separation between hardware and software makes this possible. “In its very structure computational media is ‘avant-garde’, since it is constantly being extended and thus redefined.” (93) The role of the media avant-garde is no longer performed by individuals or artist groups but happens in software design. There’s a certain view of what an avant-garde is that’s embedded in this, and perhaps it stems from Manovich’s early work on the Soviet avant-gardes, understood in formalist terms as constructors of new formalizations of media. It’s a view of avant-gardes as means of advancing – but not also contesting – the forward march of modernity.
Computers turned out to be malleable in a way other industrial technologies were not. Kay and others were able to build media capabilities on top of the computer as universal machine. It was a sort of détournement of the Turing and von Neumann machine. They were not techno-determinists. It all had to be invented, and some of it was counter-intuitive. The Alan Kay and Adele Goldberg version was at least as indebted to the arts, humanities and humanistic psychology as to engineering and design disciplines. In a cheeky aside, Manovich notes: “Similar to Marx’s analysis of capitalism in his works, here the analysis is used to create a plan for action for building a new world – in this case, enabling people to create new media.” (97)
Unlike Chun or David Golumbia, Manovich downplays the military aspect of postwar computation. He dismisses SAGE, even though out of it came the TX-2 computer, which was perhaps the first machine to allow a kind of real time interaction, if only for its programmer. From which, incidentally, came the idea of the programmer as hacker, Sutherland’s early work on computers as a visual medium, and the game Spacewar.
The Parc story is nevertheless a key one. Kay and co wanted computers that could be a medium for learning. They turned to the psychologist Jerome Bruner, and his version of Piaget’s theory of developmental stages. The whole design of the Dynabook had something for each learning stage – which Bruner and Parc treated as parallel learning strategies rather than successive stages. For the gestural and spatial way of learning, there was the mouse. For the visual and pictorial mode of learning, there were icons. For the symbolic and logical mode of learning, there was the programming language Smalltalk.
For Manovich, this was the blueprint for a meta-medium, which could not only represent existing media but also add qualities to them. It was also both an ensemble of ways of experiencing multiple media and also a system for making media tools, and even for making new kinds of media. A key aspect of this was standardizing the interfaces between different media. For example, when removing a bit of sound, or text, or picture, or video from one place and putting it in another, one would use standard Copy and Paste commands from the Edit menu. On the Mac keyboard, these even have standard key-command shortcuts: Cut, Copy and Paste are command-X, C and V, respectively.
But something happened between the experimental Dynabook and the Apple Mac. The Mac shipped without Smalltalk, or any software authoring tool. From 1987 it came with HyperCard, written by Parc alumnus Bill Atkinson – which many of us fondly remember. Apple discontinued it in 2004. Now that the iPad is a thing, it seems clear that Apple’s trajectory was in the long term away from democratizing computing and toward treating the machine as a media device.
And so Kay’s vision was both realized and abandoned. It became cheap and relatively easy to make one’s own media tools. But the computer became a media consumption hub. By the time one gets to the iPad, it does not really present itself as a small computer. It’s more like a really big phone, where everything is locked and proprietary.
A meta-medium contains simulations of old media but also makes it possible to imagine new media. This comes down to being able to handle more than one type of media data. Media software has ways of manipulating specific types of data, but also some that can work on data in general, regardless of type. View control, hyperlinking, sort and search would be examples. If I search my hard drive for ‘Manovich’, I get Word text by me about him, pdfs of his books, video and audio files, and a picture of Lev and me in front of a huge array of screens.
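A minimal sketch of such a media-independent technique, with entirely made-up file records (all names here are hypothetical, not any real filesystem API): one search routine that matches across text, video and image files alike, ignoring the type of the underlying data.

```python
# Toy file records: different media types, shared metadata structure.
files = [
    {"name": "notes.docx", "kind": "text", "tags": ["Manovich", "review"]},
    {"name": "talk.mp4", "kind": "video", "tags": ["Manovich"]},
    {"name": "cover.jpg", "kind": "image", "tags": ["book"]},
]

def search(items, term):
    """A media-independent technique: the match ignores each item's kind."""
    return [f["name"] for f in items if term in f["tags"] or term in f["name"]]

print(search(files, "Manovich"))  # ['notes.docx', 'talk.mp4']
```

The point of the sketch is that the search operates on metadata common to every file, which is why one query can return Word documents, pdfs, video and images at once.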
Such media-independent techniques are general concepts implanted into algorithms. Besides search, geolocation would be another example. So would visualization, or infovis, which can graph lots of different kinds of data sets. You could read my book, Gamer Theory, or you could look at this datavis of it that uses Bradford Paley’s TextArc.
Manovich wants to contrast these properties of a meta-medium to medium-specific ways of thinking. A certain kind of modernism puts a lot of stress on this: think Clement Greenberg and the idea of flatness in pictorial art. Russian formalism and constructivism in a way also stressed the properties of specific media, their given textures. But it was also interested in working experimentally between them, in parallel experiments in breaking media down to their formal grammars. Manovich: “… the efforts by modern artists to create parallels between mediums were proscriptive and speculative… In contrast, software imposes common media ‘properties.’” (121)
One could probably quibble with Manovich’s way of relating software as meta-medium to precursors such as the Russian avant-gardes, but I won’t, as he knows a lot more about both than I do. I’ll restrict myself to pointing out that historical thought on these sorts of questions has only just begun. Particular arguments aside, I think Manovich is right to emphasize how software calls for a new way of thinking about art history, which as yet is not quite a pre-history to our actual present.
I also think there’s a lot more to be said about something that is probably no longer a political economy but more of an aesthetic economy, now that information is a thing that can be property, that can be commodified. As Manovich notes of the difference between the actual and potential states of software as meta-medium: “Of course, not all media applications and devices make all these techniques equally available – usually for commercial and copyright reasons.” (123)
So far, Manovich has provided two different ways of thinking about media techniques. One can classify them as media-independent vs media-specific; or as simulations of old media vs new kinds of technique. For example, in Photoshop, you can Cut and Paste like in most other programs. But you can also work with layers in a way that is specific to Photoshop. Then there are things that look like old-time darkroom tools. And there are things you could not really do in the darkroom, like add a Wind filter, to make your picture look like it is zooming along at Mach 1. Interestingly, there are also high pass, median, reduce noise, sharpen and equalize filters, all of which are hold-overs from something in between mechanical reproduction and digital reproduction: analog signal processing. There is a veritable archaeology of media just in the Photoshop menus.
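What a Sharpen filter actually computes makes the signal-processing lineage concrete: a small kernel convolved over the pixel grid. A minimal sketch in plain Python – not Photoshop’s actual implementation, just the underlying technique, on a grayscale image held as a list of rows:

```python
def convolve(image, kernel):
    """Apply a 3x3 kernel to a grayscale image; edge pixels are left
    unchanged for brevity. Values are clipped to the 0-255 range."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = min(255, max(0, acc))
    return out

# The classic sharpen kernel: boost the center, subtract the neighbors.
SHARPEN = [[0, -1, 0],
           [-1, 5, -1],
           [0, -1, 0]]

flat = [[100] * 5 for _ in range(5)]
# A uniform region is unchanged: 5*100 - 4*100 = 100 at every pixel.
assert convolve(flat, SHARPEN)[2][2] == 100
```

A swap of the kernel gives the other filters in the same menu: an averaging kernel blurs, a difference kernel does edge detection, and so on, which is why these tools cluster together as descendants of one analog technique.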
What makes all this possible is not just the separation of hardware from software, but also the separation of the media file from the application. The file format allows the user to treat the media artifact on which she or he is working as “a disembodied, abstract and universal dimension of any message separate from its content.” (133) You work on a signal, or basically a set of numbers. The numbers could be anything so long as the file format is the right kind of format for a given software application. Thus the separation of hardware and software, and software application and file, allow an unprecedented kind of abstraction from the particulars of any media artifact.
One could take this idea of separation (which I am rather imposing as a reading on Manovich) down another step. Within Photoshop itself, the user can work with layers. Layers redefine an image as a content image plus modifications, each conceived as happening in a separate layer. These can be transparent or not, turned on or off, masked to affect only part of the underlying image, and so on.
The kind of abstraction that layers enable can be found elsewhere. It’s one of the non-media-specific techniques. The typography layer of this text is separate from the word-content layer. GIS (Geographic Information System) also uses layers, turning space into a media platform holding data layers. Turning on and off the various layers of Google Earth or Google Maps will give a hint of the power of this. Load some proprietary information into such a system, toggle the layers on and off, and you can figure out the optimal location for the new supermarket. Needless to say, some of this ability, in both Photoshop and GIS, descends from military surveillance technologies from the cold war.
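The layer logic can be sketched as simple per-pixel blending, with the on/off toggle as a parameter. This is a toy model of the general technique, not how Photoshop or any GIS actually implements compositing:

```python
def composite(base, layer, alpha, enabled=True):
    """Blend one layer over a base image, value by value.
    `alpha` in [0, 1] is the layer's opacity; toggling `enabled`
    is the 'turn the layer on or off' operation."""
    if not enabled:
        return base[:]
    return [round((1 - alpha) * b + alpha * l) for b, l in zip(base, layer)]

base = [100, 100, 100]
layer = [200, 0, 200]
print(composite(base, layer, 0.5))                 # [150, 50, 150]
print(composite(base, layer, 0.5, enabled=False))  # [100, 100, 100]
```

The key design point is that the base image is never modified: every adjustment lives in its own layer, which is what makes the edits separable, reorderable and reversible.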
So what makes new or digital media actually new and digital is the way software both is, and further enables, a kind of separation. It defines an area for users to work in an abstract and open-ended way. “This means that the terms ‘digital media’ and ‘new media’ do not capture very well the uniqueness of the ‘digital revolution.’… Because all the new qualities of ‘digital media’ are not situated ‘inside’ the media objects. Rather, they all exist ‘outside’ – as commands and techniques of media viewers, authoring software, animation, compositing, and editing software, game engine software, wiki software, and all other software ‘species.’” (149)
The user applies software tools to files of specific types. Take the file of a digital photo, for example. The file contains an array of pixels that have color values, plus a file header specifying dimensions, color profile, information about the camera and exposure, and other metadata. It’s a bunch of – very large – numbers. A high-def image might contain 2 million pixels and six million RGB color values. Any digital image seen on a screen is already a visualization. For the user it is the software that defines the properties of the content. “There is no such thing as ‘digital media.’ There is only software as applied to media (or ‘content’).” (152)
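A toy version of this description, with hypothetical names standing in for any real file format: the ‘file’ is just header metadata plus a flat list of color values, and ‘editing’ is arithmetic on those numbers.

```python
def make_image(width, height, rgb=(128, 128, 128)):
    """A toy 'photo file': a header dict plus a flat list of RGB values.
    Illustrative only - not any real format's layout."""
    header = {"width": width, "height": height, "color_profile": "sRGB"}
    pixels = list(rgb) * (width * height)
    return header, pixels

def brighten(pixels, amount):
    """'Editing' is arithmetic on the numbers, clipped to 0-255."""
    return [min(255, max(0, v + amount)) for v in pixels]

header, pixels = make_image(1920, 1080)
print(header["width"] * header["height"])  # 2073600 pixels, ~2 million
print(len(pixels))                         # 6220800 color values, ~6 million
```

The numbers bear out Manovich’s count: a 1920×1080 image has about 2 million pixels and, at three values per pixel, about six million RGB color values, none of which is an ‘image’ until software visualizes it.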
New media are new in two senses. The first is that software is always in beta, continually being updated. The second is that software is a meta-medium, both simulating old media tools and adding new ones under the cover of the familiar. Yet a third might be the creation not of new versions of old media or new tools for old media but entirely new media forms – hybrid media.
For instance, Google Earth combines aerial photos, satellite images, 3D computer graphics, stills, and data overlays. Another example is motion graphics, including still images, text, audio, and so on. Even a simple website can contain page description information for text, vector graphics, animation. Or the lowly PowerPoint, able to inflict animation, text, images or movies on the public.
This is not quite the same thing as the older concept of multimedia, which for Manovich is a subset of hybrid media. In multimedia the elements are next to each other. “In contrast, in media hybrids, interfaces, techniques, and ultimately the most fundamental assumptions of different media forms and traditions, are brought together, resulting in new media gestalts.” (167) It generates new experiences, different from previously separate experiences. Multimedia does not threaten the autonomy of media, but hybridity does. In hybrid media, the different media exchange properties. For example, text within motion graphics can be made to conform to cinematic conventions, go in and out of ‘focus’.
Hybrid media is not the same as convergence, as hybrid media can evolve new properties. Making media over as software did not lead to their convergence, as some thought, but to the evolution of new hybrids. “This, for me, is the essence of the new stage of computer meta-medium development. The unique properties and techniques of different media have become software elements that can be combined together in previously impossible ways.” (176)
Manovich thinks media hybrids in an evolutionary way. Like Franco Moretti, he is aware of the limits of the analogy between biological and cultural evolution. Novel combinations of media can be thought of as a new species. Some are not selected, or end up restricted to certain specialized niches. Virtual Reality, for example, was as I recall a promising new media hybrid at the trade shows of the early 90s, but it ended up with niche applications. A far more successful hybrid is the simple image map in webdesign, where an image becomes an interface. It’s a hybrid made of a still image plus hyperlinks. Another would be the virtual camera in 3D modeling, which is now a common feature in video games.
One might pause to ask, like Galloway, or Toscano and Kinkle, whether such hybrid media help us cognitively map the totality of relations within which we are webbed. But the problem is that not only is the world harder to fathom in its totality, even media itself recedes from view. “Like the post-modernism of the 1980s and the web revolution of the 1990s, the ‘softwarization’ of media (the transfer of techniques and interfaces of all previously existing media technologies to software) has flattened history – in this case the history of modern media.” (180) Perhaps it’s a declension of the fetish, which no longer takes the thing for the relation, or even the image for the thing, but takes the image as an image, rather than an effect of software.
Software is a difficult object to study, in constant flux and evolution. One useful methodological tip from Manovich is to focus on formats rather than media artifacts or even instances of software. At its peak in the 1940s, Hollywood made about 400 movies per year. It would be possible to see a reasonable sample of that output. But something like 300 hours of video are uploaded to YouTube every minute. Hence a turn to formats, which are relatively few in number and stable over time. Jonathan Sterne’s book on the mp3 might stand as an exemplary work along these lines. Manovich: “From the point of view of media and aesthetic theory, file formats constitute the ‘materiality’ of computational media – because bits organized in these formats is what gets written to a storage media…” (215)
Open a file in your software – say in Photoshop – and you quickly find a whole host of ways to make changes to it. Pull down a menu, go down the list of commands. A lot of them have sub-menus where you can change the parameters of some aspect of the file. For example, a color-picker. Select from a range of shades, or open a color wheel and choose from anywhere on it. A lot of what one can fiddle with are parameters, also known as variables or arguments. In a GUI, or Graphic User Interface, there’s usually a whole bunch of buttons and sliders that allow these parameters to be changed.
Modern procedural programming is modular. Every procedure that is used repeatedly is encapsulated in a single function that programs can invoke by name. These are sometimes called subroutines. Such functions generally solve equations. Functions that perform related tasks are gathered in libraries. Using such libraries speeds up software development. A function works on particular parameters – the color picker, for example.
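A minimal sketch of this modularity, with hypothetical names: small functions encapsulating reusable procedures, each exposing its parameters, gathered into a little ‘library’ of related operations.

```python
# A toy 'library' of related image functions. Each encapsulates one
# procedure and exposes parameters - the same ones a GUI slider would set.

def scale_channel(value, factor):
    """One reusable subroutine: multiply and clip a color value."""
    return min(255, max(0, round(value * factor)))

def adjust_brightness(rgb, factor=1.0):
    """A 'library function': the brightness slider sets `factor`."""
    return tuple(scale_channel(v, factor) for v in rgb)

def pick_color(shade, base=(255, 0, 0)):
    """A toy color picker: `shade` in [0, 1] is the parameter."""
    return adjust_brightness(base, shade)

print(adjust_brightness((100, 150, 200), 1.2))  # (120, 180, 240)
print(pick_color(0.4))                          # (102, 0, 0)
```

Both user-facing functions call the same subroutine, which is the point: the library encapsulates the procedure once, and everything above it just varies parameters.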
Softwarization allows for a great deal of control of parameters. “In this way, the logic of programming is projected to the GUI level and becomes part of the user’s cognitive model of working with media inside applications.” (222) It may even project into the labor process itself. Different kinds of media work become similar in workflow. Select a tool, choose parameters, apply, repeat. You could be doing the layout for a book, designing a building, editing a movie, or preparing photos for a magazine.
Manovich: “Of course, we should not forget that the practices of computer programming are embedded within the economic and social structures of the software and consumer electronics industries.” (223) What would it mean to unpack that? How were the possibilities opened up by Alan Kay and others reconfigured in the transition from experimental design to actual consumer products? How did the logic of the design of computation end up shaping work as we know it? These are questions outside the parameters of Manovich’s formalist approach, but they are questions his methods usefully clarify. “We now understand that in software culture, what we identify by conceptual inertia as ‘properties’ of different mediums are actually the properties of media software.” (225)
There’s a lot more to Software Takes Command, but perhaps I’ll stop here and draw breath. It has a lot of implications for media theory. The intermediate objects of such a theory dissolve in the world of software: “… the conceptual foundation of media discourse (and media studies) – the idea that we can name a relatively small number of distinct mediums – does not hold any more.” (234) Instead, Manovich sees an evolutionary space with a large number of hybrid media forms that overlap, mutate and cross-fertilize.
If one way to navigate such a fluid empirical field might be to reduce it to the principles of hardware design, Manovich suggests another, which takes the relative stability of file formats and means of modifying them as key categories. This reformats what we think the ‘media’ actually are: “… a medium as simulated in software is a combination of a data structure and a set of algorithms.” (207)
There are consequences not only for media theory but for cultural studies as well. Cultural techniques can be not only transmitted, but even invented in software design: “a cultural object becomes an agglomeration of functions drawn from software libraries.” (238) These might be embedded less in specific software programs than in the common libraries of subroutines on which they draw to vary given parameters. These design practices then structure workflow and workplace habits.
Software studies offers a very different map for rebuilding media studies than simply adding new sub-fields – for example, for game studies. It also differs from comparative media studies, in not taking it for granted that there are stable media – whether converging or not – to compare. It looks rather for the common or related tools and procedures embedded now in the software that runs all media.
Manovich’s software studies approach is restricted to that of the user. One might ask how the user is itself produced as a byproduct of software design. Algorithm plus data equals media: which is how the user starts to think of it too. It is a cultural model, but so too is the ‘user’. Having specified how software reconstructs media as simulation for the user, one might move on to thinking about how the user’s usage is shaped not just by formal and cultural constraints but by other kinds as well. And one might think also about who or what gets to use the user in turn.