The Prospect of Butlerian Jihad
Responding to anti-tech structures of feeling
Liam Mullally · March 2026 · essay
Moscow Polytechnical Museum. Photo: Mikhail (Vokhabre) Shcherbakov.

Our opinion is that war to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race. If it be urged that this is impossible under the present condition of human affairs, this at once proves that the mischief is already done, that our servitude has commenced in good earnest, that we have raised a race of beings whom it is beyond our power to destroy, and that we are not only enslaved but are absolutely acquiescent in our bondage. — Samuel Butler, Darwin Among the Machines (1863)
Then came the Butlerian Jihad — two generations of chaos. The god of machine-logic was overthrown among the masses and a new concept was raised: “Man may not be replaced.” — Frank Herbert, Dune (1965)

In the ecology of global capitalism — or at least in the Western hemisphere — American technological capital is ascendant. In the U.S. and in Europe, inflationary economies languishing without growth are increasingly looking to American tech firms to boost GDP, and are happily making deals and concessions to achieve this.

Strangely, though, public perception of these companies does not appear to match official attitudes. Indeed, public perception is at its most critical precisely in the places where policy circles pin the most hope upon artificial intelligence and other emergent technologies — in North America, in the UK and in Europe. Recent polling by Ipsos has, for instance, found that people in the Anglosphere are the most likely to be nervous and least likely to be excited about AI, followed by people in Europe — with those living in South America and Asia more likely to be positive about AI.1 Pew Research has reached similar findings: 50% of Americans are more concerned than excited about AI, and only 10% more excited than concerned.2 KPMG, likewise, has found that those living in Nigeria, India and China are the most likely to view AI as trustworthy, with Anglophone and European countries least likely.3

This perception has also been reflected in pop culture. For instance, controversy has sprung up in recent months over the video game Clair Obscur: Expedition 33, after it was discovered that generative AI had been used to make placeholder assets for the game, some of which made it into the initial release. The Indie Game Awards, which had awarded the game Game of the Year and Best Debut Game, rescinded its awards, and the developers hurried to explain that their use of AI was minimal, and to assure players that in future games, “everything will be made by humans.”4 More recently, Bandcamp has also banned music made partially or in full by generative AI from being distributed on its platform — accompanied by a blog post titled “Keeping Bandcamp Human”.5 In recent years, actors in the SAG-AFTRA union have gone on strike, explicitly demanding limitations on the use of AI by their employers.6

In these statistics and anecdotes, we can see a turn of sentiment against Big Tech and the direction it has set for the cultural economy in particular. This sentiment might be characterised as what cultural theorist Raymond Williams called a structure of feeling — a widespread affective condition that manifests through experience and culture, but remains essentially pre-political and fragmentary.7 It might include, for instance: a view that technical products and services are becoming worse, a process that Cory Doctorow has appropriately called enshittification; the breaking out of disputes over automation at work; anxieties about automation in the economy; anxieties or distaste for surveillance at work and in public; negative associations with prominent individuals like Elon Musk or Peter Thiel; distaste or distrust for Silicon Valley’s science-fiction inflected vision of the future; a perceived gap between the hype surrounding technology and its actual utility; a related view that apparent innovations are not innovations at all, but snake oil: scams, that is, designed to earn quick profits via deception.8

The question of whether this structure of feeling will be actualised as politics, and of what kind of politics it will become if it is, remains open. Union organising around the issue is growing, but remains largely economistic — concerned with the specific terms and conditions of members exposed to automation, that is — rather than political. To date, the current that has come closest to voicing the political has been the resurgence of interest in the Luddites: the 19th-century English textile workers who smashed the mechanical looms that were making them redundant. Self-described neo-Luddites argue we should take inspiration from the Luddites and launch a popular revolt against new automation technologies.9 In recent years, Brian Merchant — author of Blood in the Machine (2023), a popular history of the Luddites — has organised a series of interactive “tribunals”, in which technologies are put on trial and smashed on stage if the panel finds them wanting.10 While Merchant is always careful to frame Luddism as a considered rebellion against the exploitative technologies of big tech, the neo-Luddites, here and elsewhere, tend to direct their anger towards technical objects themselves.

It is not yet clear how new technical infrastructures, such as hyperscale data centres, will structurally reshape our economy and society, and what impact this will have on the availability and forms of political action. In the commercial press at least, the idea of techno- or neo-feudalism has become an increasingly popular framework for thinking this shift through.11 While versions of this thesis differ, they share the general proposition that new forms of technology are driving us out of capitalism, and that this movement is taking on the form of a regression towards feudal social relations (or at least something similar), rather than a progression to something totally new. Under techno-feudalism, capitalists are being displaced by lords who leverage their ownership of infrastructures to extract value from disempowered consumers and workers, themselves now closer to serfs or peasants than a conventional proletariat.

Techno-feudal theses tend to fall apart under close scrutiny: as Evgeny Morozov has argued, for instance, the forms of dispossession and expropriation contemporary writers often associate with “feudalism” have frequently existed in the history of capitalism, especially beyond the imperial core, and the structural position of the user today is quite different to that of the feudal serf.12 Most critics will concede, however, that techno-feudalists do point to real and substantial shifts in the technological and economic basis of capitalism. Jeremy Gilbert has argued compellingly that this constitutes not the end of capitalism as such, but the end of neoliberal capitalism and the start of a new regime of production, which he calls, quoting Nick Srnicek, platform capitalism.13


Here, I am less interested in entering the debate about techno-feudalism’s theoretical merits, which has been somewhat exhausted, and more interested in the particular constellation of attitudes towards technology floating around in conversations such as those about techno-feudalism and Luddism. One could add the debates between so-called eco-modernist and degrowth Marxists to this constellation — in which dividing lines have unhelpfully been drawn between a hapless embrace of capitalist growth on the one hand and a rejection of technology on the other.14 I am speaking here not just of academic debates or of commercial literature, but of podcasts, blogs, online discourse and everyday conversations — of the hard-to-pin-down space of affect and sentiment, of the aforementioned structure of feeling.

I’m sympathetic to these positions, and in particular to the attention they bring to the malign influence of technological capital today. To the extent that neo-Luddites bring critical attention to technology, they are doing useful work. But this anti-tech sentiment frequently coexists with something more troubling: the treatment of technology as an abstract and impenetrable evil, and the retreat, against this, into essentialist views of the human. In such a move, there is a danger of falling into fantastical thinking, of rallying to defend a romantic view of humanity from the corruption of machines. For reasons I will clarify shortly, this would be fundamentally misguided and serve only to distract us from the actual problem — not technology, but capitalism itself.

This brings me to Frank Herbert’s novel Dune (1965) and its sequels, the ostensible subjects of this article. Dune is premised on a technological fable — of humanity’s destruction of machines — and offers some parallels to the ideas of both the neo-Luddites and the techno-feudalists, as well as to a third narrative not yet mentioned: Silicon Valley’s own self-mythology — the fight for or against civilisation-ending Artificial General Intelligence (AGI). Thinking through this aspect of Dune can help to clarify the possibilities and limitations of the anti-tech structure of feeling.


Among those science fiction novels (and other media) that have captured the imagination of Silicon Valley, Dune is a distinct and curious example. It does not offer a critique of contemporary consumer capitalism — as in Neuromancer (1984) or Snow Crash (1992) — or a vision of a utopian post-capitalist society, as in Star Trek; instead, it depicts a future in which the economic and social systems of the past prevail — in which the future has unfolded as regression.

Herbert’s original six-novel cycle — not to mention the numerous titles subsequently written by his son — lays out a sprawling historical epic that unfolds through the political machinations of feudal houses. Technologically, the world of Dune may be more advanced than our own, but the forms of technology described in it are so distinct and alien as to more closely resemble the magic of fantasy. In these respects, Dune is a classic example of “space opera”.15

More than other forms of science fiction, space opera, like fantasy, is interested in world-building, in expanding and experimenting with its cosmos, and with history. This, more than his dense prose, is where Herbert excels as a writer. Dune appeals to its reader through maps and appendices as much as in narrative, while its epochal timescales allow generations, ecology and even geology to operate as narrative concerns. Dune’s past-future, in which capitalism has long since been abandoned for feudalism, is, as such, built on a narrative conceit from the novels’ distant past and their readers’ not-so-far future: the “Butlerian Jihad”, or a revolutionary rejection of “thinking machines” (i.e. mechanical or electronic computers).16

This is a millenarian moment in the world of Dune, in which its history departed from our future. The motivations for the Butlerian Jihad are ambiguous: it is stated that humans had been made marginal by thinking machines that out-developed and therefore subjugated them, but also implied that it was a counter-revolution against emerging technical classes.17 Either way, the Butlerian Jihad became a bloody, sustained conflict, out of which the feudal Imperium of the novels emerged. In place of thinking machines, eugenics and chemical intervention allowed for the creation of mentats, or human computers capable of advanced mathematics; navigators, who could steer spaceships through interstellar travel; and secretive groups such as the Bene Gesserit, who influence the religion and politics of Dune’s cosmos. What matters here, though, is that the Butlerian Jihad serves as a sort of genesis myth within Dune, one that creates the necessary conditions for a return or continuation of feudalism into the far future.

This basic conceit — a reversion to feudalism via the rejection of technology — makes Dune’s Butlerian Jihad a provocative counterpart to the techno-feudal thesis. As in a warped funhouse mirror, feudalism returns not through the escalation of technological exploitation, as in techno-feudalism, but through its rejection. In a historical conjuncture defined by the ascendance of technological capital, the notion of a Butlerian Jihad has become newly compelling. It resonates with the exact structure of feeling I described at the outset of this article, against the rise of AI and American tech magnates like Musk and Thiel. Indeed, alongside Luddism, the idea of a Butlerian Jihad has gained some traction in the past few years, inspiring a growing number of blogs, academic papers and opinion pieces advocating for a Butlerian Jihad against AI.18

But Herbert’s relevance lies primarily in the fact that, as a reactionary advocate of revolution against machines, he can draw out those aspects of the conceptual space of anti-tech sentiment which are most troubling, and most dangerous.


Herbert’s Butlerian Jihad may be fictional, but its eponymous originator — Samuel Butler — is not. Butler was a 19th-century English novelist, best known for his utopian satire Erewhon (1872), in which the citizens of Erewhon destroy and outlaw machines, fearing subjugation by a superior mechanical consciousness. Butler adapted this aspect of the novel directly from a series of letters he had published a decade earlier in the New Zealand newspaper The Press.

The first of these letters, “Darwin Among the Machines”, outlines Butler’s theory of technology:

We refer to the question: What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race.19

Butler was heavily inspired by Charles Darwin’s then-recent findings on evolution.20 Like many of his contemporaries, Butler was led by Darwin’s discoveries into anxiety about humanity’s place on earth, which could no longer be viewed as exceptional — at least not in theological terms.21 Butler’s proposition was that, having reached supremacy in the sphere of biological evolution, humans had instigated a process of technological evolution in which they would inevitably be exceeded, even domesticated. Humanity would cease to be the master of the world, and become — like animals, like nature — “bound down as slaves”.

The parallels between Butler’s anxieties and Silicon Valley’s AGI millenarianism — its suggestion that machine intelligence might exceed that of humans, and that this might be an apocalyptic scenario — are striking, and not coincidental.22 Butler is cited, for instance, by Alan Turing, in a short essay in which he discussed the possibility of future intelligent machines.23 The idea of autonomous machine evolution reappears in the mid-20th century in John von Neumann’s writing on self-replicating automata, which became the basis of theories of a technological singularity, a limit point beyond which machine development surpasses human capability and control.24 These references fuelled the imaginations of generations of science-fiction authors: Isaac Asimov’s I, Robot (1950); Arthur C. Clarke’s 2001: A Space Odyssey (1968); and Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968), among many others. As large language models have got better at reproducing human communication, it is this science fiction which has guided the understanding of Silicon Valley CEOs and thought leaders. One consequence is the emergence of a global “AI safety” industry, channelling state and philanthropic resources into researching “AI risks”, up to and including human extinction.25 If the development of AI is “left unchecked”, one philanthropy-aligned campaign suggests, “it will become increasingly difficult to exert meaningful control in the coming years.”26 And if these declarations do sometimes acknowledge economic transformation and cultural shifts, such concerns always come second to science-fiction-inflected paranoia about a loss of control, expressed in remarkably similar terms to those used by Butler.
It has become clichéd to point out that the word robot derives from the Czech robota, meaning “forced labour” — but knowing what we do about the relationship between big tech and its workers, it is perhaps unsurprising that they are also concerned their AI might go on strike.27

Butler’s logic, like that of many who follow him, is explicitly supremacist. It describes the current position of humanity as one of “supremacy of the earth”, and suggests that losing this would necessarily mean subjugation by another. It is no accident that he wrote his essay from within the British colony of New Zealand; indeed, one is left to wonder who he would have included within his view of “man”.28 Post- and decolonial writers have demonstrated the extent to which natives were excluded from the category of “the human” in the colonial situation, and the extent to which this was used to justify the dominion of European settlers over natives. Butler’s theory of technology follows on from a view that the subjugation of the world — and of other humans with it — is not only morally justifiable but a necessary good. And by positioning all relationships in terms of control and domination, it denies the actual interdependence and complex forms of agency between humans, technology and non-human nature.

This is not to suggest that self-described neo-Luddites are tacitly endorsing the British Empire or colonialism. But several of Butler’s assumptions do appear to have become popular among both AGI doomers and neo-Luddites: that technology is something apart from humanity; that technology has begun to corrupt a pure or romantic vision of the human; that the destruction of technology would entail liberation from exploitation and allow for the full flourishing of the human.

A more useful theory of technology has been offered by Bernard Stiegler, who suggests that human subjectivity does not exist apart from or before technology, but has in fact always been completed by it — what he calls the “originary technicity” of the human.29 For Stiegler, the formation of human subjectivity cannot be reduced to an individual genesis at birth: “A newborn child arrives into a world in which tertiary retention [technological memory] both precedes and awaits it, and which, precisely, constitutes this world as world.”30 Subjectivity, in other words, comes from outside the body as well as within it. And, for this reason, the human is not an immutable, a priori thing, but is subject to a high degree of historical and technological contingency. Shifts in technologies — especially those of memory, perception and communication — entail novel humans who experience being in qualitatively different terms.

This conception of technology muddies the idea of the human, opening it up to historical development.31 If “humanity” is not a thing-in-itself, but historically, socially and technically mutable, then the sphere of possibility of the human and of our world becomes much broader. Our relationship to the non-human — to technology or to nature — does not need to be one of control, domination and exploitation. In fact, the understanding of it as such is highly specific to the logic of capitalist and colonial exploitation and extraction. Both Butler’s and Silicon Valley’s fear of being dominated or controlled by machines is itself downstream of attempts to dominate and control the non-human world. Yet, since we rely on the non-human world for our continued existence, this goal is one which can never be achieved — and which inevitably leads to violent paranoia.

Returning to Dune, it is interesting to note that the Butlerian Jihad is not a revolt against exploitation as such — since Dune’s world is drenched in fantastical exploitation — but a defence of a human monopoly on exploitation; or more precisely, of specific classes of humans’ monopoly on exploitation. We should ask: were it possible, what would come after our own Butlerian Jihad? Would it be a more democratic, more redistributive, more caring society in which we can all flourish? Would it be closer to the world of Dune, in which the strict hierarchies of the distant past return? Most likely, it would simply mean a return to the more human, face-to-face forms of exploitation that prevailed in previous decades.


Even if we don’t need to worry about feudalism, we do still need to worry about capitalism, which is currently taking on novel forms.32 The most plausible situation is this: we are moving into a new regime of production, led by the wing of capital invested in technological development. This wing is now busy building infrastructures and weaving its way into the state, the military and much of the economy. The extent and reach of computing into all spheres of the economy, communication and everyday life are unprecedented; they are being arranged in radically new ways; new forms of automation are being trialled, as are new techniques for deriving profits from human activity.

As calls for a fight back against technology grow, the left needs to carefully consider what it is advocating for. Are we fighting the exploitation of workers, the hollowing out of culture and the destruction of the earth via technology, or are we rallying in defence of false visions of pure, a-technical humanity? The former will be necessary, but the latter is an ontologically confused dead-end. The fight against technology as such will do little to resolve the fundamental problem of exploitation, since this originates in a human willingness to exploit — not an individual moral willingness, but an economic propensity embodied in technology as well as in social relations.

This emerging regime of production demands clear and concerted attention to emerging technologies: a hard-nosed digital materialism that banishes any magical thinking and focuses on the actual dangers and possibilities of our present. This should include, where possible, the support and development of alternative technological spaces — the free and open — including those which do not yet hold a radical conception of themselves.

British Cultural Studies has tended to take a suspicious view of close attention to technology. In Raymond Williams’ classic study Television: Technology and Cultural Form, for instance, he argued that technology is “looked for and developed [by capital] with certain purposes and practices already in mind”, and therefore narrowly aligned to the interests of capital rather than to some autonomous sphere of progress.33 In very similar terms, artist and neo-Luddite Molly Crabapple has claimed in an interview with the Guardian that “technological development is shaped by money, it’s shaped by power, and it’s generally targeted towards the interests of those in power as opposed to the interests of those without it.”34 Both Williams and Crabapple are right to suggest that technological development is not autonomous, but is, in fact, steered by interests and investments. It does not, however, follow from this that capital has total command over its technologies, or that they are useless to the left. It certainly does not follow that understanding the mechanisms of technology is irrelevant to effectively combating Big Tech.

The anti-tech structure of feeling is there for the taking. But if it is to lead anywhere, it must be taken carefully: a fightback against technological exploitation will be found not in the complete rejection of technology, but in the short-circuiting of one kind of technology and the development of another. The key fights of the coming years will not be fought between humanity and machines, but between capital and whatever social coalition can form against it. Technology will be a key terrain in this conflict: one which, if we give up, we will already have lost.

Notes

  1. The Ipsos AI Monitor, 2025. [^]

  2. Pew Research Center, How People Around the World View AI, 2025. [^]

  3. KPMG, Trust attitudes and the use of artificial intelligence, 2025. [^]

  4. Patricia Hernandez, “After GOTY Pull, Clair Obscur devs draw line in sand over AI”, Polygon, 24 December 2025. [^]

  5. Bandcamp, “Keeping Bandcamp Human”, 13 January 2026. [^]

  6. SAG-AFTRA, “SAG-AFTRA A.I. Bargaining And Policy Work Timeline”; SAG-AFTRA, “SAG-AFTRA Strikes Video Games over AI”, 16 August 2024. [^]

  7. Though he used the term earlier, Williams first theorised structures of feeling in The Long Revolution (p.48). [^]

  8. Cory Doctorow, “The ‘Enshittification’ of TikTok”, Wired, 23 January 2023. [^]

  9. Brian Merchant, “I’ve always loved tech. Now, I’m a Luddite. You should be one, too.”, The Washington Post, 18 September 2023. [^]

  10. Sheelah Kolhatkar, “Revenge of the Luddites!”, The New Yorker, 23 October 2023. [^]

  11. Yanis Varoufakis and Cédric Durand have both called this technofeudalism; Jodi Dean calls it neofeudalism, and Mariana Mazzucato digital feudalism. See: Yanis Varoufakis, Technofeudalism: What Killed Capitalism, 2023; Cédric Durand, How Silicon Valley Unleashed Techno-feudalism: The Making of the Digital Economy, 2024; Jodi Dean, Capital’s Grave: Neofeudalism and the New Class Struggle, 2025; Mariana Mazzucato, “Preventing Digital Feudalism”, Social Europe, 2019. [^]

  12. Evgeny Morozov, “Critique of Techno-Feudal Reason”, New Left Review, Jan–April 2022. [^]

  13. Jeremy Gilbert, “Techno-feudalism or platform capitalism? Conceptualising the digital society”, European Journal of Social Theory, 2024. [^]

  14. For an ecomodernist perspective, see: Leigh Phillips, “Degrowth Is Not the Answer to Climate Change”, Jacobin, 1 August 2023; for a degrowth perspective, see: Kohei Saito, Marx in the Anthropocene: Towards the Idea of Degrowth Communism, 2023. For a post-mortem on the debate, see: Kai Heron, “Forget Eco-Modernism”, Verso Blog, 2 April 2024. [^]

  15. The Star Wars film franchise, the most commercially successful space opera, borrows from it extensively, though it has inverted its past-future into a future-past — a long time ago, in a galaxy far, far away. [^]

  16. Frank Herbert, Dune, 1965. [^]

  17. For the Butlerian Jihad presented as class struggle between an aristocratic and technical class, see: Frank Herbert, Children of Dune, 2008 (p.126). [^]

  18. See, e.g.: Michael Cuenco, “We Must Declare Jihad Against A.I.”, Compact, 28 April 2023; Megan McArdle, “Banning AI saved humanity in ‘Dune.’ So why can’t this work for us?”, The Washington Post, 11 May 2023; Edward Ongweso Jr., “On the Origins of Dune’s Butlerian Jihad”, The Tech Bubble, 19 September 2025; Syed Mustafa Ali, “A Butlerian Hauntology”, ReOrient, 2025; Albert Burneko, “Butlerian Jihad Now”, Defector, 2025. [^]

  19. Samuel Butler, “Darwin Among the Machines”, The Press, 13 June 1863. [^]

  20. Darwin’s On the Origin of Species was published just five years earlier. [^]

  21. See, e.g.: George Levine, Darwin and the Novelists: Patterns of Science in Victorian Fiction, 1988. [^]

  22. Ilya Sutskever, co-founder and chief scientist at OpenAI, is reported to have claimed that they are “definitely going to build a bunker before [they] release AGI”. See: Zoe Kleinman, “Tech billionaires seem to be prepping. Should we all be worried?”, BBC News, 10 October 2025. [^]

  23. Turing typically treats these machines with fondness rather than concern. See: Alan Turing, “Intelligent Machinery, A Heretical Theory”, 1951. [^]

  24. John von Neumann, Theory of Self-Reproducing Automata, 1966. [^]

  25. Center for AI Safety, “Statement on AI Risk”, 2023. [^]

  26. AI Red Lines, “We urgently call for international red lines to prevent unacceptable AI risks”, 2025. [^]

  27. Generally attributed to Karel Čapek’s science fiction play, Rossum’s Universal Robots, 1920. [^]

  28. See, for instance: Frantz Fanon, The Wretched of the Earth, 1963; or Walter Mignolo and Catherine Walsh, On Decoloniality: Concepts, Analytics, Praxis, 2018. [^]

  29. This is the titular “fault” of Epimetheus — a lack that must be completed by technology — from Stiegler’s best-known text, Technics and Time, 1: The Fault of Epimetheus, 1994. See also: N. Katherine Hayles, How We Became Posthuman, 1999. [^]

  30. Bernard Stiegler, For a New Critique of Political Economy, 2010 (p.9). [^]

  31. Hence, when Donna Haraway argued “I would rather be a cyborg than a goddess”, she was making a claim for historical agency and against essentialist notions of gender; Donna Haraway, “A Cyborg Manifesto: Science, Technology and Socialist-Feminism”, Socialist Review, 1985. [^]

  32. See: Editorial, “The Technology Question Today”, Disjunctions, 2025. [^]

  33. Raymond Williams, Television: Technology and Cultural Form, 1975 (p.14). [^]

  34. Tom Lamont, “‘Humanity’s remaining timeline? It looks more like five years than 50’: meet the neo-luddites warning of an AI apocalypse”, The Guardian, 2024. [^]