“Fire” from The Four Elements and Temperaments (ca. 1583) | After Maerten de Vos / Rijksmuseum / CC0
This is an inaugural lecture, delivered at an intense, precarious moment. Staff and faculty at the New School for Social Research, including the legacy Schools of Public Engagement, and staff and faculty at parts of Lang and Parsons, have been told that many of them will soon be fired. We are members of a university community at a time when antidemocratic, fascist governments in the United States and elsewhere are attacking universities. Many have sacrificed precious time, energy, and peace of mind fighting for this university. My lecture touches on related topics. I’ll close with the importance, for pro-democratic defiance, of the ideals of the original 1919 and refounded 1933 New School for Social Research—that is, the ideals of the current NSSR and The New School more generally.
The term “artificial intelligence” (AI) was introduced in 1955 by American computer scientist John McCarthy. Both the academic field and the industry aim to build machines that display human-level intelligent activity in most domains. In 2022, when large language models (LLMs) seized the world’s attention and OpenAI’s ChatGPT became, for many, synonymous with AI, a massive AI springtime broke out. A cohort of companies secured unprecedented funding while their leaders declared that artificial general intelligence (AGI)—machines capable of displaying human-level intelligent activity in all or most domains—would soon arrive.
This narrative obscures how, in building ever-larger LLMs, big AI companies became responsible for grievous social injustices and environmental damage. A group of philosophers, the longtermists, aids the obfuscation by arguing that AGI is, morally, the most important and perhaps also the most dangerous endeavor for humankind. Here I examine the longtermist tradition en route to describing what is being done to the world in AGI’s name and discussing how to philosophize in resistance.
Longtermism’s ancestry in AI
Longtermism emerged from conversations among AI researchers. This gets overlooked because the tradition is described as a future-oriented variant of the utilitarianism-based philanthropic program Effective Altruism (EA). EA was started in 2011 by Oxford philosophers Toby Ord and William MacAskill. It aims to make charitable work and gifts maximally effective. EA’s originators are represented as impressed by a famous 1972 argument of Peter Singer’s about practical implications of applying utilitarian principles to the suffering of the world’s poor, and by Singer’s 2009 proposal for a charitable movement. This leaves out EA’s roots in AI circles.
During the aughts, Eliezer Yudkowsky, cofounder of the Berkeley-based Machine Intelligence Research Institute (MIRI), started two blogs devoted to rationalism. Rationalism aims to “improv[e] human reasoning and decision-making” so that there are smart people to create and control intelligent machines. Some in Yudkowsky’s online community wanted to use rationalist ideas to do good, and Yudkowsky once posted about “Effective Altruism.” That was in 2007, four years before Ord and MacAskill named their movement and two years before they founded Giving What We Can and practiced EA avant la lettre. Ord, who by 2009 was active on Yudkowsky’s blog LessWrong, would have been aware both of routes to EA via rationalist notions and of rationality training’s aspiration to control AI systems. By 2006, Ord was collaborating with philosopher Nick Bostrom, who in 2005 had founded Oxford’s Future of Humanity Institute (FHI), an institute partly concerned with AGI’s risks. Although EA wasn’t AI-oriented, it stemmed partly from discussions about AGI-related topics that would preoccupy longtermists, and its fundraising virtuosity derived from its AI ties. Longtermism is, as Mollie Gleiberman argues, the parent, not the child, of EA.
Skip to 2017, when MacAskill coined the moniker “longtermism” for an EA-related view championed at Bostrom’s FHI, Ord’s home institution. This view, which Ord defended in a 2020 book and MacAskill championed in his best-selling 2022 What We Owe the Future, is that humankind is at a stage when we could annihilate ourselves or proceed to a radiant future, and that we should prioritize confronting threats to this future. A radiant future is one in which vast numbers of humans’ digital descendants live for billions of years by colonizing the galaxies. Longtermists mostly maintain that anthropogenic threats are likelier than natural ones and that the biggest threat is machines with above-human-level intelligence whose values are unaligned with “ours.” Ord and MacAskill represent themselves as breaking from standard EA and developing longtermism via a future-indexed strain of utilitarianism. They don’t mention that the views that inspired longtermism circulated in Silicon Valley decades earlier.
Bostrom provided the first formulation of longtermism. As a graduate student, he believed technology would render traditional disciplines obsolete. He discovered a like-minded community on an email list managed by “Extropians,” a group of modern transhumanists, and encountered AI-oriented transhumanism. Modern transhumanists hold that genetic engineering, AI, and molecular nanotechnology will enable us to transcend the human condition, augment our intelligence, and overcome disease and aging. Most believe, Alexander Thomas explains, this means merging with machines and colonizing exoplanets. In 1998, Bostrom cocreated the World Transhumanist Association (WTA) and started reformulating transhumanist ideas for analytic philosophers. In 2002, he introduced the now ubiquitous category of “existential risk” for threats to the “potential of humankind to develop” into transhumanists’ envisioned posthumanity. A 2003 article, “Astronomical Waste,” presents a utilitarian case for holding that population expansion through space colonization is so valuable that reducing risks to it should be our top moral priority. In 2014, Bostrom argued, in Superintelligence, that nonaligned superintelligent machines are the biggest existential risk. Having run Oxford’s FHI for a decade, he was an establishment figure who had converted academically disreputable Silicon Valley transhumanism into mainstream analytic philosophy. Ord and MacAskill went further with high-profile books on longtermism that omit all mention of transhumanism.
Small surprise that tech leaders received longtermism admiringly. Bill Gates declared he would “highly recommend” Superintelligence; Elon Musk enthused similarly in a 2014 tweet; and OpenAI’s Sam Altman proclaimed the book “well-worth a read.” In 2022, Musk linked to Bostrom’s “Astronomical Waste,” writing that it was “likely the most important paper ever written,” and reposted copy for MacAskill’s What We Owe the Future, declaring it “a close match for my philosophy.” MacAskill reportedly had a massive publicity budget from Facebook cofounder Dustin Moskovitz.
This adulation for longtermism is directed at researchers at institutes AI billionaires fund. For example, Skype cofounder Jaan Tallinn helped found Cambridge University’s Centre for the Study of Existential Risk and the longtermist Future of Life Institute (FLI); the latter received $14 million from Musk and gets most of its funding from Vitalik Buterin, cocreator of the cryptocurrency Ethereum, who gave it $650 million in 2021. Funding such think tanks secures intellectual credibility and influence at global seats of power.
The scope of the AI-longtermism relationship rarely gets attention. There was a clamor in 2022, when the crypto exchange FTX declared bankruptcy, and a year later, when its CEO Sam Bankman-Fried was convicted of fraud and conspiracy. It was known that MacAskill counseled Bankman-Fried to “earn to give”; that MacAskill advised FTX’s charitable fund; and that the fund pledged millions to EA and longtermist organizations MacAskill led and advised. The scandal of the 2023 unearthing, by philosopher Émile Torres, of a racist email Bostrom wrote in 1996 went largely unregistered.
Springtime for AI
Building “safe” AGI is the framework for AI companies’ jostling in the LLM era. Cofounded in 2010, DeepMind was the first company explicitly dedicated to building AGI. When, in 2015, Musk, Altman, and others founded the nonprofit OpenAI, they championed “artificial general intelligence for the benefit of humanity.” In 2018, Musk left OpenAI, claiming a nonprofit couldn’t succeed, while Altman started a for-profit OpenAI division to ensure “artificial general intelligence…benefits all of humanity.” When, in 2021, Dario Amodei cofounded Anthropic with other former OpenAI employees, they promised a better approach. In 2023, Musk founded xAI “to build a good AGI,” and, in 2024, Meta’s Mark Zuckerberg joined in.
Preoccupation with AGI makes sense for modern transhumanism. Projecting a future in which technology makes humans super-intelligent, super-long-lived beings who enjoy super well-being, this tradition treats AGI as utopia’s gateway. AGI supposedly leads to computers with above-human-level intelligence, triggering an exponentially increasing cycle of technological progress. This “singularity” is humanity’s route to techno-glory. Reaching it requires a consolidation of money and power, and, although transhumanism is sometimes represented as politically neutral, and although it has progressive wings, Silicon Valley transhumanism is staunchly libertarian.
These themes circulated in the 1990s in AI forums, like the Extropian email list. One self-designated transhumanist on this list was Ray Kurzweil. Another was Yudkowsky. In 2006, Kurzweil, Yudkowsky, and tech-right leader Peter Thiel started the Singularity Summit, a transhumanism-themed conference. Thiel rejects the label “transhumanism” but has supported transhumanism-dominated organizations, including not just MIRI and the Singularity Summit but the anti-aging Methuselah Foundation and Humanity+, WTA’s successor. An original funder of the transhumanism-aligned Seasteading Institute, Thiel funds “network states.” Thiel’s mentee Altman invests in anti-aging technologies, supports network states, and imagines arriving soon at “space colonization … and high bandwidth brain-computer interfaces.” There’s a similar transhumanist pattern to Musk’s projects.
Until recently, most AI leaders sounded transhumanist themes about AGI’s importance while debating the size of the “existential risk” that an AGI “unaligned” with human values would extinguish humanity. AI leaders were largely “doomers,” maintaining that care needs to be taken to avoid lethal AGI and grappling with the “alignment problem.” There is a doomer extreme, represented by Yudkowsky, who thinks the United States should halt AI research and destroy “rogue datacenter[s] by airstrike,” even at risk of triggering a “full nuclear exchange.” Opposing doomers are “accelerationists,” such as Marc Andreessen of “The Techno-Optimist Manifesto,” who are wholly positive about technologies like AI. Trump’s second term has seen a shift from doomerism to accelerationism.
The rallying cry of doomers is AI safety, a domain addressing only the “existential risk” of a theoretical rogue AGI. This excludes AI ethics, which addresses harms like loss of data privacy, algorithmic biases, fake news, hate speech, worker exploitation, extractivist injuries, and devastating energy use. Doomers often combine concern for AI safety with efforts to block non-“safety”-related regulation, since, as “AI godfather” Geoffrey Hinton opines, racial justice and related issues aren’t “as existentially serious as…these [machines] getting more intelligent than us and taking over.”
Though doomers tout safety concerns, they also take AGI, the speculative project of a few wealthy, Global North-based men, to be so important that it outweighs justice and environmental issues. This is the view that receives longtermists’ moral imprimatur.
Longtermist logic and a double defect
Longtermists offer consequentialist arguments that equate value with wellbeing and so count as utilitarian doctrines (see here and here). They presuppose that value is recognizable from an abstract point of view of the universe and treat wellbeing as a metric to compare value anywhere, across space to the poor, across species to animals, and across time to the future.
Philosophers situate longtermism in the field of population ethics, which addresses conundrums of future-facing applications of utilitarianism. Longtermists index moral assessment to total aggregate wellbeing, despite recognizing that such total utilitarianism entails the so-called “Repugnant Conclusion”: the conclusion that, compared with a population with good quality of life, a much larger population leading barely tolerable lives is morally preferable. Resisting the view that a tie to this conclusion is a reductio, longtermists barrel on.
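In schematic terms (my gloss of the standard total-utilitarian comparison, not any longtermist’s own notation): let population A contain n people each at high wellbeing w, and population B contain m people each at barely positive wellbeing ε. Total utilitarianism compares

\[
U(A) = n \cdot w \quad \text{with} \quad U(B) = m \cdot \varepsilon ,
\]

and whenever \(m > n w / \varepsilon\), it ranks B above A, however close to zero ε is. That structure is what longtermists accept when they index moral assessment to total aggregate wellbeing.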
For longtermists, a situation in which eight billion humans lead meaningful, Earth-bound, democratic existences for millennia before peacefully going extinct is worse, by orders of magnitude, than one in which posthuman trillions with barely tolerable lives survive into a cosmic future. (Think of Matrix-like situations with humans harvested for their bio-power.) Some regard the latter outcome as important enough to justify mass near-term suffering and mortality. Here longtermist moral logic echoes fascist autocrats and sci-fi villains (e.g., the Marvel Universe’s Thanos) happy to cause mass death to bring on utopian futures.
This logic is longtermists’ gift to AI leaders, who claim AGI is humankind’s bridge to utopia and so a moral priority. Despite doomer-accelerationist disagreement, there is agreement that AGI matters morally more than harms for which it may be responsible.
This is disastrous moral reasoning. Like other consequentialism-based theories, longtermism resembles forms of welfarism that, unable to assess transformative social interventions, presuppose existing social arrangements. Consequentialists cannot meet the objection that restricting themselves to abstract methods, which seem to enable us to aggregate value anywhere, blocks recognition of the iniquitous social circumstances that liberating movements protest.
Consider overlapping antiracist, feminist, and pro-Indigenous movements engaged in struggle against systems inflicting physical, social, and emotional wounds. Many such injustices are invisible apart from the history of the targeted structures. Consequentialism-influenced theories’ quantitative methods cannot illuminate these structures. Worse, these methods can strengthen harmful social mechanisms, which persist partly by hiding their workings.
The failure of longtermist ethics extends to AI leaders’ reliance on it. Projections of the total digital wellbeing in an AGI-enabled techno-future don’t justify neglecting the harms AI is causing now.
An additional critique starts from a case for doubting that AI companies are hastening toward AGI. The rationale for investing hope in LLMs turns on thinking these “pure language” models are candidates for natural language understanding. LLMs give the impression of intentional speech through sensitivity to the relative frequency of expressions in their databases. They have weaknesses in accuracy and causal reasoning that are dramatized by studies showing they “collapse” when fed the output of other LLMs, and that figure in debates about new approaches to supply worldly “grounding.” Even supposing accuracy gains, there’s a problem with the idea that successful models will self-improve and trigger explosive technological growth. This would require agency, which, on a plausible classic view, involves the ability to step back from impulses to believe or do particular things and ask whether one should believe or do them. It’s unclear why we should expect LLMs to become “agentic.” Complex systems can have novel emergent features, but, if beliefs about AGI’s imminence refer to such features, they are more religious than scientific.
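To make the frequency point concrete, here is a deliberately toy sketch of my own, nothing like a production LLM in scale or architecture; the corpus, names, and every detail are illustrative inventions. It continues a piece of text purely by sampling whichever words most often followed the previous word in its tiny corpus.

import random
from collections import defaultdict, Counter

# A tiny stand-in for an LLM's training data.
corpus = "the model repeats the patterns the corpus makes frequent".split()

# Record how often each word follows each other word (bigram frequencies).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(seed, length=6):
    """Extend `seed` by sampling next words in proportion to their corpus frequency."""
    words_out = [seed]
    for _ in range(length):
        counts = following.get(words_out[-1])
        if not counts:
            break  # the last word never appears mid-corpus; stop generating
        candidates, weights = zip(*counts.items())
        words_out.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words_out)

print(continue_text("the"))

The output can look sentence-like without the program understanding anything; the doubt registered above is that scaling the same statistical idea up by many orders of magnitude yields frequency sensitivity rather than natural language understanding.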
There is simply no excuse, of the sort AI leaders and longtermists intimate, for overlooking the harms of the self-enriching efforts of a small group of men to build ever-larger LLMs.
AI’s harms and injustices
Ethics of AI is an expanding discourse addressing things like exploitation of labor in building LLMs, ways current LLMs perpetuate racist and other biases, and ways even unbiased AI tools inflict harm.
This corpus encompasses, inter alia, ways generative AI systems can weaken democracy. These tools sap institutional knowledge because, law professors Woodrow Hartzog and Jessica Silbey argue, they encourage “cognitive offloading and skill atrophy,” “short-circuit institutional decision-making,” and isolate humans “by displacing opportunities for human connection.” Consider the Trump administration’s “Department of Government Efficiency,” or DOGE, intended to modernize “federal technology and software to maximize governmental efficiency and productivity.” Abstracting from ethical breaches in its deployment, DOGE introduced systems that displace expertise and eliminate roles for decision-making, disrespecting the balance between efficiency and other ideals of civic life. There are similarly corrosive deployments of AI technologies into health systems, schools, universities, and legal systems. This shrinks the space in which we see each other acting, register differences, and encounter the need to change. It shrinks the space for democratic life.
AI ethics also details the environmental costs of building and running LLMs. AI companies use significant amounts of copper and lithium in their data-processing centers. By 2027, their water use is projected to surpass Denmark’s. By 2030, their energy use is projected to rival what Japan uses today. And it is “documented in the literature on environmental racism that the negative effects of climate change [impact] the world’s most marginalized communities first.”
The practices of AI companies that longtermism equips us to (mis)represent as morally justified are practices that corrode democratic community and inflict grievous social and environmental harms, disproportionately hurting marginalized people the world over.
Philosophical morals
Longtermism’s story is about money—in the form of fancy research institutes, jobs, titles, book endorsements, publicity budgets, and government ties—giving immensely damaging ideas a foothold with philosophers and the public. One moral is that worldly appeal doesn’t reliably indicate good ideas.
Longtermism’s story is also about utilitarianism’s many lives. In 1961, Iris Murdoch declared, “There should have been a revolt against utilitarianism.” There are well-known normative critiques of utilitarianism, like John Rawls’s 1971 complaint that, in aggregating value, utilitarians don’t “take seriously the distinction between persons.” Rawls inherits Kant’s idea of the unquantifiably precious “dignity” of individuals, which is radical in a world that treats national wealth as consistent with widespread misery. But Rawls embeds his radical idea in an entrenched metaphysic, presupposed by utilitarians, on which the world is drained of value. Given the fit of utilitarianism with dominant politico-economic arrangements, it’s no surprise that, absent a metaphysical revolution, utilitarianism repeatedly returns. Murdoch’s 1961 complaint is that, in Anglo-American philosophy, there has been no revolutionary critique of the metaphysics modern utilitarian theories presuppose. Her still-true observation bears on the longtermism-AI saga.
Murdoch attacks the metaphysical idea that reality is bereft of moral value, which appears to justify treating the world as indefinitely extractable, instrumentalizable, and commodifiable. We need to question this metaphysics to grasp challenges of getting our worldly circumstances in view in normative theory. We fail if we succumb to the pressure, exerted by the metaphysics, to survey things from a “standpoint of the universe.” We must be open to a value-laden world accessible only to thought that explores historical, cultural, ecological, and other perspectives.
This is why we must fight for institutions, like NSSR, dedicated to social understanding that is not reducible to the narrowly factual or merely technical, and that illustrates how the world shifts with new perspectives, opening potentially liberating possibilities. Joining this fight is pivotal for pro-democratic struggle, for democracy’s plurality requires a standing willingness to respond to perspectival provocations. Any innovations, AI-related or otherwise, that overreach in relieving us of the need to confront the dissonance of each other’s viewpoints risk draining democracy’s lifeblood.
The perspectivally demanding thinking of healthy democracies is the kind of thinking required to register the structural racist, ableist, and sexist injuries, and overlapping environmental harms, that AI companies, abetted by longtermism, inflict. It is the kind of thinking required for an apt political response to Big AI’s calamitous enthusiasm, on lurid display in the United States, for dismantling democracy, accommodating fascism, and building network states in which capital is free to exploit workers and ravage the Earth.
This is an excerpt from the Inaugural Walter Eberstadt Professorship Lecture, delivered on March 11, 2026, in the Wolff Conference Room at NSSR. The lecture itself was an adapted excerpt from my article, “Longtermism is Ideological Cover for the Harms of AI: Philosophy in the Time of Techno-Fascism,” forthcoming in Philosophy, the journal of the Royal Institute of Philosophy, in 2026, a portion of which is used here with the journal’s permission.