<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="/pretty-feed-v3.xsl" type="text/xsl"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Disjunctions Magazine</title><description>A new magazine dedicated to the analysis and critique of contemporary technoscience.</description><link>https://disjunctionsmag.com</link><item><title>At Intel</title><link>https://disjunctionsmag.com/articles/at-intel</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/at-intel</guid><description>Organising within and against the semiconductor industry</description><pubDate>Mon, 27 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;In addition to profiting off the brutalisation of workers across semiconductor supply chains, the microprocessor firm Intel has also long been a key strategic partner to the state of Israel. These ties have only grown since the start of the genocide in Gaza, prompting workers to take action. This is a broad description of the semiconductor industry, followed by an account of organising at Intel.&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Semiconductors — silicon, germanium, gallium, and the like — are elements around the centre of the periodic table. Their unique physical properties allow them to switch between insulating and conducting electricity, making them the foundation of all computing today: the fundamental building block of mobile devices, personal computers, servers, data centres, cloud computing, AI, and everything else imaginable.&lt;/p&gt;
&lt;p&gt;The semiconductor industry is global, featuring both firms that focus on specific segments of the supply chain and those that handle end-to-end production. This heterogeneity makes it rather challenging to understand the industry and to identify the different actors and regimes of exploitation. This article will discuss a few components of semiconductor supply chains; we urge everyone to use these as pointers for further research and education.&lt;/p&gt;
&lt;p&gt;The very first phase of manufacturing, often not even described as such, is the extraction of resources, of minerals as raw materials. Silicon is the most common such element used in chip production due to its wide availability; it is generally mined from open-pit mines in Latin America, West Africa, and Australia. It then undergoes several stages of purification and refinement in facilities around the world, many of which are in China. Two alternatives to silicon, often used for specialised high-frequency, low-noise applications, are gallium (as GaAs) and germanium (as SiGe). The production of gallium, a by-product of bauxite mining in Guinea, India, Indonesia, Brazil, and several other countries, generates large volumes of chemical waste. This mining, which has resulted in the destruction of forests, livelihoods, and ecosystems, has long been opposed by indigenous groups. Other prominent mined materials include copper, cobalt, and aluminium, all of which are used for wiring and interconnections on semiconductor chips. The global supply chains that mediate the extraction of these minerals ultimately obfuscate the extent of exploitation and environmental distress at the point of extraction.&lt;/p&gt;
&lt;p&gt;After the crystallised minerals travel to manufacturing facilities worldwide in wafer form, the semiconductor industry’s major players begin to get involved. The next steps can be broken down into two major phases: design and manufacturing. Firms that do both design and manufacturing are known as Integrated Device Manufacturers (IDMs). Examples of IDMs include Intel, NXP, Infineon, STMicroelectronics, Analog Devices, Samsung, SK Hynix, and Micron. There are also firms that restrict their activities to either design or manufacturing. Prominent design-only corporations — so-called &lt;em&gt;fabless&lt;/em&gt; firms, such as Nvidia, Qualcomm, Apple, AMD, Marvell, and Broadcom — each have their own specialised design areas; they do not, however, manufacture their own chips. That task is instead left to another group of firms, known as &lt;em&gt;foundries&lt;/em&gt;, which specialise in fabricating chips. The dominant firm here is TSMC, with GlobalFoundries another prominent player; Rapidus is a newer Japanese firm trying to break into the market. Some IDMs, such as Intel and Samsung, also function as “custom” foundries, manufacturing chips for their fabless customers.&lt;/p&gt;
&lt;p&gt;The design phase of production, as the name suggests, involves designing circuits and their interconnections to optimise chip performance and power consumption. Given that chips are made of millions of components, this phase depends heavily on electronic design automation tools. That is where firms such as Cadence, Synopsys, and Siemens come into the picture. These firms occupy a niche market position, developing specialised automation software tools and intellectual property (IP) cores for use by both IDMs and fabless design firms. In most cases, the design problems addressed here are classic geometric optimisation problems of the kind long studied in computer science. Yet this area has recently experienced a major push towards integrating artificial intelligence, and as in many other fields of work, the motivation for this push on the part of management is simply “everyone else is doing this”.&lt;/p&gt;
&lt;p&gt;The so-called &lt;em&gt;front-end&lt;/em&gt; phase of manufacturing uses design masks with nanometer-level precision to create circuitry on circular wafers. This phase is often the most resource-intensive in the production system. In the first place, the equipment used in this phase — lithography machines from ASML, along with deposition and etch tools from firms such as ASM and Applied Materials — is highly technologically advanced and expensive. Second, operating an average-sized fabrication facility also requires huge amounts of electricity and around 35 million litres of water per day — resource consumption comparable to that of a medium-sized town. These facilities also require acres of land, which are generally exempted from land and property taxes due to strategic industrial zoning.&lt;/p&gt;
&lt;p&gt;Altogether, this industry has a significant environmental and economic impact on surrounding communities. Chemical waste, including condensed vapours, is emitted into the atmosphere; water shortages are also a serious concern, even though a large portion of the water used by these facilities is intended for recycling. The costs associated with this recycling process are prohibitive. Its effectiveness, which depends upon factors like resource availability, local regulations, and ecological restrictions, is also far from guaranteed. People living near these facilities also often experience utility service disruptions and frequent rate hikes. And even though these facilities tend to have far higher electricity and water consumption than their neighbours, they often pay rates that are significantly subsidised. City, state, and federal governments collude with these corporations, offering them preferential access to resources, tax breaks, and opportunities for further land acquisitions. A perfect example of these concerns is Intel’s presence in Hillsboro, Oregon, which many community organising groups have recently been fighting against.&lt;/p&gt;
&lt;p&gt;In addition to these issues, this industry also lacks adequate labour regulations. An example of this is Foxconn, a major manufacturer of iPhones and other electronic gadgets. Already known for its poor working conditions in China, Foxconn has recently expanded its facilities to the outskirts of Bangalore in southern India. These facilities employ large numbers of dormitory-dwelling migrant workers, who are subjected to gruelling working conditions, with little regard for worker protections or rights. Ultimately, because the adverse effects of this industry are experienced by a large section of workers and communities across a diversity of geographies, there is an opportunity for a commonality to emerge: for workers and local organisations to sense the similarity of the forces arrayed against them and, thus, the unity of their struggle; and for solidarity to be extended outside the boundaries of a single facility or location and across the borders of nation states.&lt;/p&gt;
&lt;p&gt;Finally, the last phase of manufacturing, often referred to as the &lt;em&gt;back-end&lt;/em&gt; phase, involves cutting the wafers into specific sizes, followed by testing and packaging chips as finished products. This phase is often carried out by another group of firms known as Outsourced Semiconductor Assembly and Test firms, such as Amkor, KYEC, ChipMOS, and ASE. On the whole, back-end manufacturing is less resource-intensive than the front-end phase; the more immediate issue here is labour conditions. The work required — wiring, testing, assembly, and packaging — often takes place in South and Southeast Asia. Corporations take advantage of large pools of labour, poverty, corruption, and almost non-existent labour laws in peripheral geographies to squeeze out surplus value from workers in any way they can.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Production ends when chips enter the circuits of consumption, becoming part of our phones, computers, or online services. This poses a challenge: how can we demystify the products and services we receive as context-less objects, fetishised commodities separate from the labour and the raw materials that have gone into making them?&lt;/p&gt;
&lt;p&gt;We — a group of current and former Intel employees — began trying to answer this question, coming together under the name &lt;em&gt;United Chips Against Global Exploitation&lt;/em&gt;, or simply UNCAGE. We felt a particular sense of urgency after Israel’s relentless bombardment of Gaza began, given Intel and Big Tech’s prominent economic and material involvement in the occupation of Palestine, and now in genocide. This is not to say that we were ever naïve enough to believe these corporations genuinely cared about the high humanitarian values they espoused. But even the thin pretence afforded by codes of conduct and DEI initiatives was now melting away.&lt;/p&gt;
&lt;p&gt;In the context of Palestine, Intel has long been a key contributor to Israel’s apartheid economy, accounting for almost 2% of the occupying state’s GDP. The firm’s very first overseas location was established in Haifa in 1974; its presence in Israel has only grown since. Today, Intel is the largest tech employer in the country, employing over 50,000 individuals, either directly or indirectly; some 17% of these are actively serving in the IDF reserve force. In 2022, the firm’s Israeli branch had exports worth $9b, and received a historic $50b in investment. Fab28, one of Intel’s biggest fabrication facilities, has operated since 2008 in Kiryat Gat: formerly a Palestinian village known as Iraq al-Manshiyya, only 32 kilometres north-east of Gaza. Even under the 1947 UN Partition plan, a plan that was highly favourable to occupying Zionist forces, this village would legally have been part of a Palestinian state. In 1948, the village was home to more than 2,000 Palestinians, who were ethnically cleansed by Israeli forces during the Nakba, despite a truce that was supposedly in effect. This is where Intel, which claims to “conduct business with honesty and integrity”, chooses to build its factories.&lt;/p&gt;
&lt;p&gt;After October 2023, Intel’s hypocrisy became impossible to ignore. Pat Gelsinger, the CEO at the time, made a dramatic live appearance on the firm’s website, almost in tears as he expressed his pain for Israelis, the “most resilient people on Earth”. In the following months, as we bore witness to one of the most brutal genocides the world has ever seen, Intel announced an additional $25b of investment in expanding the Kiryat Gat fab facility. Seeing Israel’s Minister of Finance — the West Bank settler Bezalel Smotrich — use Intel’s investment as political validation was deeply upsetting.&lt;/p&gt;
&lt;p&gt;Intel’s workers tried to engage with company executives through regular channels. They met with HR, the legal department, and government affairs teams. The company did not respond. During Intel’s annual Q&amp;amp;A, employees asked what it would take for Intel to reconsider investments that violate human rights. The norm at these Q&amp;amp;As has been for employees to vote on questions and for the CEO to answer the highest-ranking ones; that year, this question was deliberately skipped, despite receiving the most votes. This was when our group started to follow the lead of fellow tech workers at Google, Microsoft, Amazon, Meta, and others. We established ties with the BDS campaign, in order to pressure the company from the outside and to stress the economic damage its activities in Israel could cause.&lt;/p&gt;
&lt;p&gt;Consequently, on 15 April 2024, there were protests at several Intel campuses — Hillsboro, OR, Chandler, AZ, and Santa Clara, CA.[^1] Demonstrators, partnering with community organisations, demanded Intel’s divestment from the Israeli state. An employee resigned in order to draw attention to Intel’s lack of response. We also supported efforts by employees and investors to launch a shareholder campaign — an “ethical impact assessment” proposal asking Intel to address its values and clarify the moral implications of its decisions. The board recommended shareholders vote against this proposal, stating that the company’s humanitarian commitments had always been upheld, and that any reevaluation of Intel’s activities in Israel would be economically ill-advised.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Ultimately, we at &lt;em&gt;UNCAGE&lt;/em&gt; wanted to build on worker politicisation in the context of the genocide and expand our consciousness-building work to the entire industry. Here, we were confronted with a number of challenges. For one, we found that the scope of consumer pressure was limited, since semiconductor firms technically deliver intermediate products and not consumer products. In addition, worker fragmentation posed a serious difficulty. Fabrication facilities, for instance, tend to feature highly disconnected, hierarchical structures, with workers divided across many roles — engineers, technicians, cleanroom operators, and so on — each with a different relationship to the work process and to management, as well as a different degree of leverage and vulnerability. Many technicians, for example, are hired as temporary contractors, making it harder to include them in employee-driven campaigns. Finally, the constant loss of core organisers due to layoffs, combined with the visa precarity faced by migrant workers, also presented serious hurdles.&lt;/p&gt;
&lt;p&gt;Despite these difficulties, there have been small victories. A few months after first announcing the plan, Intel suspended the $25b investment in the Kiryat Gat facility.[^2] Although the company cited economic reasons for the suspension, it is clear that pressure from our campaign also played a part. A few months later, one of Intel’s Hillsboro campuses cancelled an annual “family event” at the very last minute, fearing protests that would expose its complicity. And in 2025, the shareholder proposal organised by workers gained almost 10% of the vote, despite the board’s recommendation. The proposal is set to be presented again, in slightly altered form, at this year’s shareholder meeting in May.[^3]&lt;/p&gt;
&lt;p&gt;There is still a long way to go, however, and it is important to underscore that if we are to organise effectively, we cannot confine our activities to particular locations or to specific firms and entities. The semiconductor industry relies on a global web of dependencies and interconnections that we must be able to understand and organise across. This was our motivation for constructing a broad map of the semiconductor economy in this article. If we are to be capable of standing against the interests of capitalists and imperialists, we will need far more unity across and between the industry’s various vertices.[^4]&lt;/p&gt;
&lt;p&gt;[^1]:  Nicholas LaMora, “Activists demonstrate at Intel’s Hillsboro campus in protest of Israel factory expansion”, &lt;em&gt;Hillsboro News Times&lt;/em&gt;, 16 April 2024; Dylan Wickman, “Intel’s support of Israel protested at Chandler campus”, &lt;em&gt;Arizona Republic&lt;/em&gt;, 17 April 2024.&lt;/p&gt;
&lt;p&gt;[^2]:  Tobias Mann, “Intel interrupts work on $25B Israel fab, citing need for ‘responsible capital management’”, &lt;em&gt;The Register&lt;/em&gt;, 10 June 2024.&lt;/p&gt;
&lt;p&gt;[^3]:  Intel Corporation, “Form 8-K Current Report”, &lt;em&gt;United States Securities and Exchange Commission&lt;/em&gt;, 6 May 2025.&lt;/p&gt;
&lt;p&gt;[^4]:  Workers and community members interested in organising with us can reach us at &lt;a href=&quot;mailto:uncage_united@proton.me&quot;&gt;uncage_united@proton.me&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Asymmetric Image Wars</title><link>https://disjunctionsmag.com/articles/asymmetric-image-wars</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/asymmetric-image-wars</guid><description>Or, how I learnt to stop worrying and love the slop</description><pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;The disappearance of the individual subject, along with its formal consequence, the increasing unavailability of the personal style, engender the well-nigh universal practice today of what may be called pastiche.&lt;/em&gt;
 — Fredric Jameson, &lt;em&gt;Postmodernism, or, The Cultural Logic of Late Capitalism&lt;/em&gt; (1991)

&lt;em&gt;War can never break free from the magical spectacle because its very purpose is to produce that spectacle… There is no war, then, without representation, no sophisticated weaponry without psychological mystification. Weapons are tools not just of destruction but also of perception.&lt;/em&gt;
 — Paul Virilio, &lt;em&gt;War and Cinema&lt;/em&gt; (1989)
&lt;/p&gt;
&lt;p&gt;Usually, when an unsuspecting social media user encounters AI-generated imagery in their increasingly contaminated feeds, the response is one of immediate, abject revulsion. It is a digital gag reflex expressed through vomit emojis, a dystopic calculation of the implied energy and water footprint, and a creeping sense of having witnessed not merely a synthetic image, but the death of human culture itself. This visceral response is not misplaced under late, late capitalism. Fredric Jameson famously diagnosed “pastiche” as the cultural symptom of the postmodernist disappearance of individual subjectivity and style — leaving behind only the hollow imitation of dead forms.[^1] The AI image is arguably computational pastiche — or, in the vernacular of the internet, &lt;em&gt;slop&lt;/em&gt; — saturated to its logical endpoint, as style and subjectivity are not merely decentred but statistically dissolved. The revulsion towards such images only intensifies when they originate from fascist quarters, as witnessed in the outrage against Donald Trump’s diabolical “Gaza Riviera” video last year, which trivialised the tragedy of an ongoing genocide through the tastelessness of real-estate speculation.[^2]&lt;/p&gt;
&lt;p&gt;However, we now find ourselves amidst a curious inversion of affect, where computational pastiche seems to have found its parodic potential — something Jameson argued pastiche could never do. A recent wave of AI-generated counterpropaganda videos depicting the U.S.-Israeli war on Iran has captured the anti-war, anti-imperialist imagination in ways that no prior synthetic images have managed. Most prominent among them are the blocky LEGO-style animations, in which plastic caricatures of Trump and Netanyahu peruse the Epstein files, attack schoolchildren in Iran, and are bombarded in retaliation by Iranian missiles — all set to a catchy AI-generated rap soundtrack.&lt;/p&gt;
&lt;p&gt;The theoretical temptation to read these AI videos through Jameson’s understanding of pastiche as simply the “imitation of a peculiar or unique, idiosyncratic style” is understandable.[^3] But such a generalised reading would obscure the specific &lt;em&gt;political&lt;/em&gt; context of pastiche circulating now as counterpropaganda — less as the terminal stage of postmodern aesthetic exhaustion than as a strategic redeployment of pastiche’s formal logic in the service of overt parody. Even amongst critics of generative AI, therefore, these parody videos have been shared and celebrated with collective catharsis: a catharsis that testifies to an overwhelming fatigue with the relentless, one-sided narratives mainstreamed by Western media and by Hollywood.&lt;/p&gt;
&lt;p&gt;As David Robb documents in &lt;em&gt;Operation Hollywood&lt;/em&gt;, the Pentagon has for decades operated a formal script-approval system through which access to military hardware worth billions of dollars is exchanged for editorial control over how the armed forces are portrayed, with liaison officers describing favoured productions as a “commercial” for them.[^4] The consequence of this, as Carl Boggs and Tom Pollard argue in &lt;em&gt;The Hollywood War Machine&lt;/em&gt;, is a cinema structurally integrated into a “culture of militarism” — one that has consistently glamourised imperial violence, from the WWII “good war” genre to the post-9/11 blockbuster that deploys star actors, soaring soundtracks, and technological maximalism to legitimise the warfare state.[^5] Hollywood war filmmaking remains amongst the most capital-intensive genres in the industry. Its sensory overload serves its ideological function, with stars functioning less as artists and more as props for imperial soft power.&lt;/p&gt;
&lt;p&gt;In &lt;em&gt;War and Cinema&lt;/em&gt;, Paul Virilio famously argued that modern warfare is inseparable from cinematic technique, as both rely on what he called the “logistics of perception”.[^6] Weapons, for Virilio, are technologies not just of destruction but of perception, and war cannot break free from the magical spectacle because its very purpose is to manage images and deceive the enemy. For most of the twentieth century, that spectacle was largely a monopoly of the West — industrialised through Hollywood’s alliance with the Pentagon into an unchallenged ideological machine. It is these asymmetric image wars (or AI wars, if the pun holds) that the counterpropaganda videos emerging from China and Iran have begun to contest in real time. The Western monopoly over death and destruction may remain intact, but its hold over the logistics of perception is increasingly being challenged by a rival storytelling stack. In Iran, this media war has been strategically organised over the past decade by social media-savvy teams of IRGC-aligned young creators, who craft and circulate more relatable messages for global audiences.[^7] We are witnessing a shift in these image wars through the dialectic of slop and spectacle — of pastiche and propaganda — that now operates between Hollywood’s painstaking perfection and the barefaced syntheticity of AI videos.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Take, for instance, &lt;em&gt;White Eagle vs. Persian Cat&lt;/em&gt;, an AI-generated short film released by Chinese state media last month that rapidly drew millions of views across platforms.[^8] Produced and distributed via official state media channels, including China Central Television (CCTV), the film deploys the &lt;em&gt;wuxia&lt;/em&gt; aesthetic to frame geopolitical conflict as a martial arts epic, replete with flying swordsmen and gravity-defying stunts, rendered in the hyperkinetic visual style of fantasy film. The nonhuman allegory is sophisticated in its materialist critique. The &lt;em&gt;White Eagle&lt;/em&gt;, draped in stars-and-stripes regalia, represents U.S. imperial overreach; the &lt;em&gt;Persian Cat&lt;/em&gt;, a whiskered warrior drawing on the agility and cunning of feline movement, stands for Iranian resistance. Much of the action unfolds in the Golden Flow Valley, a strategic bottleneck through which flows “black iron essence” — an unmistakable metaphor for oil. The visual relief here emerges from the reterritorialisation of spectacle, as we watch the elaborate, capital-intensive machinery of CGI being turned against U.S. imperialism. After the rapid allegorical relay of recent events, from the assassination of Ali Khamenei to the blockade of the Strait of Hormuz, the film finally ends with the implicit economic vision of de-dollarisation and a post-hegemonic imaginary in which trade is rerouted through alternative corridors of multipolar alignment.&lt;/p&gt;
&lt;p&gt;The video’s viral circulation is evidence of a growing appetite for alternative narratives that refuse the contents and conventions of the Western military-entertainment complex. Scholars have coined the term &lt;em&gt;slopaganda&lt;/em&gt; to describe AI-generated content that combines the “mass personalisation” of recommendation systems with propaganda’s goal of influencing the “decision-making capacities of groups” at unprecedented scale and speed.[^9] The coinage is timely but ideologically constrained, as its empirical examples run almost exclusively from Goebbels to Steve Bannon to Elon Musk. &lt;em&gt;White Eagle vs. Persian Cat&lt;/em&gt; is slopaganda, technically speaking, but the concept does not quite account for the contexts in which generative AI has acquired a distinctive parodic potential against the very Western media apparatus the term was coined to describe. In this case, it is slopaganda with Chinese characteristics.&lt;/p&gt;
&lt;p&gt;The compute economics underwriting this new logistics of perception have shifted both technically and geopolitically. Perhaps it is no coincidence, then, that OpenAI shut down Sora — its AI video-generation platform — in the same week the &lt;em&gt;White Eagle vs. Persian Cat&lt;/em&gt; video was being widely circulated. Sora has reportedly burned through billions in inference costs, generating only a fraction of that in its total lifetime revenue. Such a catastrophic compute-revenue gap has forced OpenAI to not only abandon video generation entirely, but also to prematurely end its recent $1 billion IP-sharing partnership with Disney.[^10] The other major U.S. video generation model — Google’s Veo 3 — survives by gating its upper-tier version behind a $250/month plan, a far cry from Sora’s abortive business model as a social media platform where users could generate and share AI videos with a $20/month subscription.&lt;/p&gt;
&lt;p&gt;In contrast, Chinese video generation models have shown more economic viability through their architectural efficiency and ecosystem integration, despite also operating at a loss. Kling 3.0, owned by Kuaishou, uses a 3D variational autoencoder architecture that compresses space and time together rather than processing frame by frame, simulating physical depth without the computational excess that made Sora’s diffusion transformer unsustainable.[^11] Another popular Chinese model — Seedance 2.0, developed by ByteDance — has narrowed its compute-revenue gap by embedding directly into CapCut’s editing pipeline, thereby integrating video generation into a platform that over a billion users already use daily. These models also benefit from China’s “Eastern Data, Western Computing” policy, which routes intensive computational workloads to low-cost data centres built in the country’s resource-rich western provinces.[^12] Underlying all of this is a structural advantage, where the Chinese state treats AI video less as a speculative consumer product and more as sovereign digital infrastructure, subsidising it accordingly.&lt;/p&gt;
&lt;p&gt;Jurisdictions over intellectual property also differentiate Chinese video models from American ones. Earlier this year, when Seedance 2.0 users generated and circulated a hyper-realistic clip of Tom Cruise and Brad Pitt fighting on a rooftop, the Motion Picture Association condemned the model’s training as IP theft on a massive scale.[^13] Whether through deliberate strategy or regulatory indifference, these models effectively treated Hollywood films as a training commons. While OpenAI had to pursue expensive licensing deals (including its ill-fated billion-dollar partnership with Disney), Chinese firms operated with greater impunity, letting lawyers catch up later. Operating in a kind of safe harbour beyond the immediate reach of U.S. and European IP enforcement, these firms have effectively decommodified Western cultural assets. And rather than halting development in response to Hollywood’s complaints, they have introduced content filters in select international markets while maintaining more permissive models for domestic users.&lt;/p&gt;
&lt;p&gt;My own fieldwork with Indian AI creators has revealed how Chinese AI video models like Kling and Seedance have quietly built a significant user base in India, where creators across the political spectrum prefer to use them because of their cheaper subscriptions and greater copyright latitude. The same tools are mobilised very differently depending on who is using them. Hindu nationalism’s digital foot soldiers use AI video models to generate religious, jingoistic, and Islamophobic content, while counterpublics use them to imagine alternative political and infrastructural futures outside the terms set by the state. What connects these content creators is a distributed relationship with these models, developed through repetition, workarounds, and the painstaking automation of workflows across multiple platforms. The most telling example of generative AI’s disruptive potential for countering state propaganda has come from Dhruv Rathee, one of India’s most prominent liberal critics of the right-wing Modi government. Rathee, who has been working as an AI entrepreneur of late, recently created an AI-generated spoof of &lt;em&gt;Dhurandhar&lt;/em&gt;, a recent Bollywood propaganda blockbuster. This spoof, titled &lt;em&gt;Bhawandar&lt;/em&gt; (“storm”), is a computational parody of the cinematic idiom through which xenophobic politics in India have been gaining cultural legitimacy.[^14]&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;In the ongoing war waged by the United States and Israel, pro-Iranian digital content creators — navigating the murky space between grassroots meme warfare and state-aligned production — have generated similar counterpropaganda videos, almost certainly using Chinese video models. A prominent case is the Iranian student-run channel &lt;em&gt;Explosive Media&lt;/em&gt; (Akhbar Enfejari), which has claimed independence from the state, though its LEGO-style AI videos have also been amplified by Iranian state media.[^15] Working in 24-hour production cycles, the team writes scripts and generates visuals using AI and digital editing tools, producing roughly two minutes of video per day. In one of these clips, blocky toy versions of Donald Trump and Benjamin Netanyahu launch missiles, alongside a character representing the Devil, with the Epstein files cited as the motivation for the attacks.[^16] The animation shifts to scenes of retaliatory Iranian missiles striking Tel Aviv and U.S. outposts in the Gulf, interspersed with toy soldiers returning in flag-draped caskets made of plastic blocks.&lt;/p&gt;
&lt;p&gt;To vernacularise the aesthetics of a toy brand this way is not only to belittle the masculinist grammar of U.S. and Israeli military spectacle, but also to exploit the reach of Western intellectual property against the West itself. LEGO is amongst the most recognisable visual forms for a global audience raised on LEGO playsets, movies, video games, and so on. The Lego Group, a private Danish company, has had longstanding ties to Hollywood through film partnerships with studios like Universal Pictures and Warner Bros. Despite these connections, the company lacks the jurisdictional reach to meaningfully litigate against Iranian creators for infringing copyright, not least because Iran already operates under Western sanctions that restrict its integration into global financial systems.&lt;/p&gt;
&lt;p&gt;IP constitutes the legal-economic architecture through which late capitalism circulates, imitates, and monetises culture. When Jameson famously argued that postmodernism cannibalises past styles through pastiche, he did not consider the late-capitalist enclosure of culture through IP regimes that criminalise unauthorised imitation. For Jameson, pastiche is “without any of parody’s ulterior motives, amputated of the satiric impulse, devoid of laughter.”[^17] However, as evidenced by their widespread circulation and celebration, the reception of LEGO-style AI videos is instead marked by cathartic laughter. They reintroduce the satiric impulse through their excess of fidelity to form, combined with their timely deployment in a context of asymmetrical information warfare. When Iranian creators generate videos referencing the Epstein files, depicting Trump, Netanyahu, and Hegseth as LEGO figures killing civilians, they are engaging in a computational pastiche of IP itself — using the West’s own imitative visual culture against itself.&lt;/p&gt;
&lt;p&gt;This mimicry also demonstrates a granular awareness of U.S. politics and visual culture, a striking contrast to U.S. propaganda describing Iran as belonging to the “stone age” or to warmongering U.S. politicians who can hardly locate Iran on a map. Iran’s AI portrayal of the United States as an imperialist, settler-colonial entity with paedophiles in power, therefore, operates through a subversion of the IP regime that controls the circulation of its vaunted images. It accelerates an implosion of pastiche, as the commodity logic of late capitalism begins to cannibalise its own legal superstructure. If the culture industry developed intellectual property to manage and monetise cultural production, these videos show how the commodity form has escaped those enclosures entirely under generative AI.&lt;/p&gt;
&lt;p&gt;This crisis of IP inevitably extends to the simulatability of stardom. Despite decades of prognostications about the decline of stardom, the Hollywood star remains a primary driver of global box office returns. But as Virilio describes, stars were always “inorganic individuals through an arbitrary selection of indefinitely reproducible common features.”[^18] As an example, he details how Marilyn Monroe was discovered by a US army photographer during the Second World War, and how her body was “at once expandable like a giant screen and capable of being folded and reproduced like a poster, a magazine cover or a centre-spread” — never connected to anything but its own reproducibility.[^19] Hollywood has historically managed this plasticity of the star’s image through contracts, exclusivity agreements, and the fiction of celebrity. In a computational twist, however, Hollywood stars can now be digitally reproduced through a basic prompt, their likenesses captured and simulated without consent. What Virilio identified as the expandable, foldable nature of the photographic star has accelerated into the promptable star — detachable from any original referent and statistically recombinable at will. This threatens not merely a celebrity’s ability to monetise their face, but the entire architecture of value extraction built around star exclusivity. Unsurprisingly, then, Hollywood groups have condemned Bytedance’s Seedance 2.0 for its ability to simulate the industry’s most bankable stars with unauthorised precision.&lt;/p&gt;
&lt;p&gt;Circling back to Iranian counterpropaganda videos, a satirical AI film trailer depicting the ongoing war features Paul Giamatti as Netanyahu, Ian McKellen as Ali Khamenei, Jake Gyllenhaal as Mojtaba Khamenei, Liam Neeson as Trump, Zach Galifianakis as JD Vance, and Judi Dench as Keir Starmer.[^20] This three-minute geopolitical parody anticipates a Hollywood blockbuster told entirely from an oppositional gaze, using Hollywood’s own stars against its imperial narratives. If, as Richard Dyer argued, the star image condenses “contradictions within and between ideologies” into a seemingly coherent individual — ideology made flesh and given a human face — what generative AI has achieved here is to sever that face from the ideological function it was originally built to perform.[^21] Virilio’s “inorganic individual” has, it would seem, become fully computable. The political pliability of Hollywood stars, who can be deployed to humanise imperial violence in one cycle and then redeployed to critique it in the next, now matches the plasticity of synthetic images in the service of real-time counterpropaganda.&lt;/p&gt;
&lt;p&gt;On a tangential note, the richness of Iranian visual culture — the hypnotic stillness of Iranian realist cinema and the mesmerising symmetry of Persian architecture — is conspicuously absent from this rapid churn of AI-generated videos. The aesthetic distance between that long tradition and a LEGO Netanyahu could not be greater. But perhaps this absence is the point. After all, these parodies are not attempting to extend or replace Iranian visual culture; nor are they claiming to be cinema. Rather, as uncanny weapons in asymmetric image warfare, they turn Hollywood’s visual monopoly against itself, simulating the mass and momentum of stardom and spectacle to throw it off balance. The cathartic laughter we experience in watching this pirate appropriation of the entire absurd apparatus — from stars to franchises — is not &lt;em&gt;in spite&lt;/em&gt; of its artificiality, but &lt;em&gt;because&lt;/em&gt; of it. Hollywood, however, is yet to be in on its own joke.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Critics and cinephiles tend to be squeamish about AI images, and this discomfort has only intensified as the technology has rapidly moved towards an uncomfortable photorealism. As models have expanded through massive corpora and compute, this hyperscale teleology has also alienated a generation of artists who had found creative possibility in earlier, more erratic models, exploiting the unpredictabilities of the latent space to steer image generation towards a computational surrealism. But the characteristic anatomical errors, synthetic smoothening effects, and kinetic inconsistencies that once made AI images legible as flawed images are fast disappearing.&lt;/p&gt;
&lt;p&gt;To meet the moment, therefore, we must look beyond dismissive connotations of &lt;em&gt;slop&lt;/em&gt;. Though the term usefully captures a widespread aesthetic revulsion towards these images, it risks — as described in this article — flattening differentiated political contexts into an undifferentiated mush of pixels. More specifically, it smuggles in a Euro-American sensorial discomfort towards what postcinema scholar Shane Denson has helpfully theorised as “discorrelated images” — computational images that have slipped free of the perceptual and temporal scales through which human vision operates phenomenologically.[^22] To mock or mourn this slippage as slop is to remain largely indifferent to the material conditions that make these images possible, and the political uses to which they are already being put. Roland Meyer has aptly described the visual environment of AI images as “platform realism” — a second-order aesthetic derived from past images, optimised for consumer expectations, and filtered through “white, Western, male, middle-class aesthetic values”.[^23] But once we move away from generalised anxieties about the statistical corruption of visual culture, and study the more specific shifts happening globally around IP, and around the simultaneous production and parodification of spectacle, these AI videos open up a contradiction in existing visual culture that platform realism — like slopaganda — alone cannot account for.&lt;/p&gt;
&lt;p&gt;Separated by over a decade, Hito Steyerl’s conceptualisations of the “poor image” and the “mean image” were never meant to describe the same thing. On the one hand, the poor image is “a copy in motion”, degraded through piracy and compression, losing resolution as it defies copyright and gains circulation.[^24] On the other hand, mean images are “statistical renderings”, replacing photographic indexicality and political contradiction with stochastic probability.[^25] Arguably, in the case of the counterpropaganda AI videos, these two visual formations have begun to bleed into one another, as the mean image enters the poor image’s circuits of informal distribution, acquiring both its pirate circulation and political charge. As Chinese and Iranian AI war videos circulate through Telegram channels, recompressed and reposted across Big Tech platforms despite bans, the mean image unexpectedly acquires the fugitive quality of the poor image.&lt;/p&gt;
&lt;p&gt;However, in our enthusiasm for this inversion, let’s not mistake the weaponisation of AI video for some kind of revolution, or the parodification of spectacle for the dismantling of the spectacular society altogether. My suggestion, therefore, is also not that AI counterpropaganda videos under asymmetric image warfare should be treated as a grand redemption narrative for hyperscale AI itself. Generative AI remains, by any sober accounting, a net negative: an instrument of extraction and surveillance that violates Hollywood IP with the same casual indifference with which it exploits precarious data workers, dispossesses artists of their creative labour, and extracts the planet’s resources. However, in an asymmetric conflict where one side is granted impunity despite bombing over a hundred schoolchildren and the other side is condemned for blowing up detested data centres, it would be hypocritical not to contextualise Iran’s AI counterpropaganda as a net positive against the hegemony of existing war spectacle.&lt;/p&gt;
&lt;p&gt;This article’s wager, then, is that AI images — copyright disputes notwithstanding — have the potential to erode the visual monopoly of Hollywood’s military-entertainment complex from within. Past wars in Vietnam, the Gulf, and Iraq produced propaganda that made imperial violence appear necessary and noble. Indeed, future U.S. and Israeli joint productions may well attempt the same for their ongoing war crimes in Palestine, Lebanon, Yemen, and Iran. But their efficacy as spectacle may well be diminished in the long run, their spell broken, when counterpropaganda can be generated at computational speed and negligible cost from a basement studio. Or so one can hope. For now, we witness the curious inversion of Hollywood’s own visual grammar, not through the guerrilla commitments of Third Cinema, but through the repurposing of AI platforms that were never really designed for “countervisuality”.[^26] The dialectic of slop and spectacle, and of pastiche and propaganda, offers no anticolonial guarantees — but necessary openings born of fatigue, and moments of cathartic laughter in the face of asymmetric image wars.&lt;/p&gt;
&lt;p&gt;[^1]:  Fredric Jameson, &lt;em&gt;Postmodernism, or, The Cultural Logic of Late Capitalism&lt;/em&gt;, 1991.&lt;/p&gt;
&lt;p&gt;[^2]:  Guardian News, “Donald Trump shares bizarre AI-generated video of ‘Trump Gaza’”, &lt;em&gt;YouTube&lt;/em&gt;, &lt;a href=&quot;https://www.youtube.com/watch?v=PslOp883rfI&quot;&gt;https://www.youtube.com/watch?v=PslOp883rfI&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[^3]:  Jameson, &lt;em&gt;Postmodernism, or, The Cultural Logic of Late Capitalism&lt;/em&gt;, p. 17.&lt;/p&gt;
&lt;p&gt;[^4]:  David L. Robb, &lt;em&gt;Operation Hollywood: How the Pentagon Shapes and Censors the Movies&lt;/em&gt;, 2004, p. 37.&lt;/p&gt;
&lt;p&gt;[^5]:  Carl Boggs and Tom Pollard, &lt;em&gt;The Hollywood War Machine&lt;/em&gt;, 2016, p. 1.&lt;/p&gt;
&lt;p&gt;[^6]:  Paul Virilio, &lt;em&gt;War and Cinema: The Logistics of Perception&lt;/em&gt;, 1989.&lt;/p&gt;
&lt;p&gt;[^7]:  Narges Bajoghli, “In the Room with Iran’s Social Media Savants”, &lt;em&gt;New York Magazine&lt;/em&gt;, 7 April 2026.&lt;/p&gt;
&lt;p&gt;[^8]:  FastOrange, “CCTV AI Propaganda Video: White Eagle Alliance vs. the Persian Cats”, &lt;em&gt;YouTube&lt;/em&gt;, &lt;a href=&quot;https://www.youtube.com/watch?v=5dGY0_pgkv8&quot;&gt;https://www.youtube.com/watch?v=5dGY0_pgkv8&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[^9]:  Michał Klincewicz et al., “Slopaganda: The Interaction between Propaganda and Generative AI”, &lt;em&gt;Filosofiska Notiser&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^10]:  Hayden Field, “Why OpenAI Killed Sora”, &lt;em&gt;The Verge&lt;/em&gt;, 28 March 2026.&lt;/p&gt;
&lt;p&gt;[^11]:  Jianhong Bai et al., “SemanticGen: Video Generation in Semantic Space”, &lt;em&gt;arXiv&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^12]:  Ning Zhang et al., “The ‘Eastern Data and Western Computing’ Initiative in China Contributes to Its Net-Zero Target”, &lt;em&gt;Engineering&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^13]:  Dan Milmo and Andrew Pulver, “‘It’s over for us’: release of new AI video generator Seedance 2.0 spooks Hollywood”, &lt;em&gt;The Guardian&lt;/em&gt;, 13 February 2026. For the MPA’s response, see: Gene Maddaus, “Motion Picture Association Pushes ByteDance to Curb Seedance 2.0 AI Infringement”, &lt;em&gt;Variety&lt;/em&gt;, 20 February 2026.&lt;/p&gt;
&lt;p&gt;[^14]:  Dhruv Rathee, “Reality of Dhurandhar Film”, &lt;em&gt;YouTube&lt;/em&gt;, &lt;a href=&quot;https://www.youtube.com/watch?v=wWIJNCU8OOs&quot;&gt;https://www.youtube.com/watch?v=wWIJNCU8OOs&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[^15]:  Kyle Chayka, “The Team Behind a Pro-Iran, Lego-Themed Viral-Video Campaign”, &lt;em&gt;The New Yorker&lt;/em&gt;, 2 April 2026.&lt;/p&gt;
&lt;p&gt;[^16]:  The Independent, “Iran State Media Share Lego Propaganda Video”, &lt;em&gt;YouTube&lt;/em&gt;, &lt;a href=&quot;https://www.youtube.com/watch?v=wo7e2OjyEBo&quot;&gt;https://www.youtube.com/watch?v=wo7e2OjyEBo&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[^17]:  Jameson, &lt;em&gt;Postmodernism, or, The Cultural Logic of Late Capitalism&lt;/em&gt;, p. 17.&lt;/p&gt;
&lt;p&gt;[^18]:  Virilio, &lt;em&gt;War and Cinema&lt;/em&gt;, p. 41.&lt;/p&gt;
&lt;p&gt;[^19]:  Virilio, &lt;em&gt;War and Cinema&lt;/em&gt;, p. 25.&lt;/p&gt;
&lt;p&gt;[^20]:  Vandahood Live, “IRAN WAR - The Movie”, &lt;em&gt;YouTube&lt;/em&gt;, &lt;a href=&quot;https://www.youtube.com/watch?v=FDeBbzaj8oA&quot;&gt;https://www.youtube.com/watch?v=FDeBbzaj8oA&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[^21]:  Richard Dyer, &lt;em&gt;Stars&lt;/em&gt;, 1979, p. 34.&lt;/p&gt;
&lt;p&gt;[^22]:  Shane Denson, &lt;em&gt;Discorrelated Images&lt;/em&gt;, 2020.&lt;/p&gt;
&lt;p&gt;[^23]:  Roland Meyer, “Platform Realism: AI Image Synthesis and the Rise of Generic Visual Content”, &lt;em&gt;Transbordeur: Photographie histoire société&lt;/em&gt; 9, 2025, p. 17.&lt;/p&gt;
&lt;p&gt;[^24]:  Hito Steyerl, “In Defense of the Poor Image”, &lt;em&gt;e-flux&lt;/em&gt;, 2009.&lt;/p&gt;
&lt;p&gt;[^25]:  Hito Steyerl, “Mean Images”, &lt;em&gt;New Left Review&lt;/em&gt;, March–June 2023.&lt;/p&gt;
&lt;p&gt;[^26]:  Nicholas Mirzoeff, &lt;em&gt;The Right to Look: A Counterhistory of Visuality&lt;/em&gt;, 2011.&lt;/p&gt;
</content:encoded></item><item><title>The Prospect of Butlerian Jihad</title><link>https://disjunctionsmag.com/articles/prospect-butlerian-jihad</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/prospect-butlerian-jihad</guid><description>Responding to anti-tech structures of feeling</description><pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;Our opinion is that war to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race. If it be urged that this is impossible under the present condition of human affairs, this at once proves that the mischief is already done, that our servitude has commenced in good earnest, that we have raised a race of beings whom it is beyond our power to destroy, and that we are not only enslaved but are absolutely acquiescent in our bondage.&lt;/em&gt;
— Samuel Butler, &lt;em&gt;Darwin Among the Machines&lt;/em&gt; (1863)

&lt;em&gt;Then came the Butlerian Jihad — two generations of chaos. The god of machine-logic was overthrown among the masses and a new concept was raised: &quot;Man may not be replaced.&quot;&lt;/em&gt;
— Frank Herbert, &lt;em&gt;Dune&lt;/em&gt; (1965)
&lt;/p&gt;
&lt;p&gt;In the ecology of global capitalism — or at least in the Western hemisphere — American technological capital is ascendant. In the U.S. and in Europe, inflationary economies languishing without growth are increasingly looking to American tech firms to boost GDP, and are happily making deals and concessions to achieve this.&lt;/p&gt;
&lt;p&gt;Strangely, though, public perception of these companies does not appear to match official attitudes. It is precisely in the places where policy circles pin the most hope upon artificial intelligence and other emergent technologies — in North America, in the UK and in Europe — that public perception is at its most critical. Recent polling by Ipsos has, for instance, found that people in the Anglosphere are the most likely to be nervous and least likely to be excited about AI, followed by people in Europe — with those living in South America and Asia more likely to be positive about AI.[^1] Similar findings have been reached by Pew Research, which finds that 50% of Americans are more concerned than excited about AI, and only 10% more excited than concerned.[^2] Similarly, KPMG has found that those living in Nigeria, India, and China are the most likely to view AI as trustworthy, with Anglophone and European countries least likely.[^3]&lt;/p&gt;
&lt;p&gt;This perception has also been reflected in pop culture. For instance, controversy has sprung up in recent months over the video game &lt;em&gt;Clair Obscur: Expedition 33&lt;/em&gt;, after it was discovered that generative AI had been used to make placeholder assets for the game, some of which made it into the initial release. The Indie Game Awards, which had awarded the game Game of the Year and Best Debut Game, rescinded both awards, and the developers hurried to explain that their use of AI was minimal, and to assure players that in future games, “everything will be made by humans”.[^4] More recently, &lt;em&gt;Bandcamp&lt;/em&gt; has also banned music made partially or in full by generative AI from being distributed on its platform — accompanied by a blog post titled “Keeping Bandcamp Human”.[^5] In recent years, actors in the SAG-AFTRA union have gone on strike, explicitly demanding limitations to the use of AI by their employers.[^6]&lt;/p&gt;
&lt;p&gt;In these statistics and anecdotes, we can see a turn of sentiment against Big Tech and the direction it has set for the cultural economy in particular. This sentiment might be characterised as what cultural theorist Raymond Williams called a &lt;em&gt;structure of feeling&lt;/em&gt; — a widespread affective condition that manifests through experience and culture, but remains essentially pre-political and fragmentary.[^7] It might include, for instance: a view that technical products and services are becoming worse, a process that Cory Doctorow has appropriately called &lt;em&gt;enshittification&lt;/em&gt;; the outbreak of disputes over automation at work; anxieties about automation in the economy; anxiety about or distaste for surveillance at work and in public; negative associations with prominent individuals like Elon Musk or Peter Thiel; distaste for or distrust of Silicon Valley’s science-fiction-inflected vision of the future; a perceived gap between the hype surrounding technology and its actual utility; a related view that apparent innovations are not innovations at all, but snake oil: scams, that is, designed to earn quick profits via deception.[^8]&lt;/p&gt;
&lt;p&gt;The question of whether this structure of feeling will be actualised as politics, and of what kind of politics it will become if it is, remains open. Union organising around the issue is growing, but remains largely economistic — concerned with the specific terms and conditions of members exposed to automation, that is — rather than political. To date, the current that has come closest to voicing this politics has been the resurgence of interest in the Luddites: the 19th-century English textile workers who smashed the mechanical looms that were making them redundant. Self-described neo-Luddites argue we should take inspiration from the Luddites and launch a popular revolt against new automation technologies.[^9] In recent years, Brian Merchant — author of &lt;em&gt;Blood in the Machine&lt;/em&gt; (2023), a popular history of the Luddites — has organised a series of interactive “tribunals”, in which technologies are put on trial and smashed on stage if the panel finds them wanting.[^10] While Merchant is always careful to frame Luddism as a considered rebellion against the exploitative technologies of Big Tech, the neo-Luddites, here and elsewhere, tend to direct their anger towards technical objects themselves.&lt;/p&gt;
&lt;p&gt;It is not yet clear how new technical infrastructures, such as hyperscale data centres, will structurally transform our economy and society, and what impact this will have on the availability and forms of political action. In the commercial press at least, the idea of techno- or neo-feudalism has become an increasingly popular framework for thinking this shift through.[^11] While versions of this thesis differ, they share the general proposition that new forms of technology are driving us out of capitalism, and that this movement is taking on the form of a regression towards feudal social relations (or at least something similar), rather than a progression to something totally new. Under techno-feudalism, capitalists are being displaced by lords who leverage their ownership of infrastructures to extract value from disempowered consumers and workers, themselves now closer to serfs or peasants than a conventional proletariat.&lt;/p&gt;
&lt;p&gt;Techno-feudal theses tend to fall apart under close scrutiny: as Evgeny Morozov has argued, for instance, the forms of dispossession and expropriation contemporary writers often associate with “feudalism” have frequently existed in the history of capitalism, especially beyond the imperial core, and the structural position of the user today is quite different to that of the feudal serf.[^12] Most critics will concede, however, that techno-feudalists do point to real and substantial shifts in the technological and economic basis of capitalism. Jeremy Gilbert has argued compellingly that this constitutes not the end of capitalism as such, but the end of neoliberal capitalism and the start of a new regime of production, which he calls, quoting Nick Srnicek, &lt;em&gt;platform capitalism&lt;/em&gt;.[^13]&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Here, I am less interested in entering the debate about techno-feudalism’s theoretical merits, which has been somewhat exhausted, and more interested in the particular constellation of attitudes towards technology floating around in conversations such as those about techno-feudalism and Luddism. One could add the debates between so-called eco-modernist and degrowth Marxists to this constellation — in which dividing lines have unhelpfully been drawn between a hapless embrace of capitalist growth on the one hand and a rejection of technology on the other.[^14] I am speaking here not just of academic debates or of commercial literature, but of podcasts, blogs, online discourse and everyday conversations — of the hard-to-pin-down space of affect and sentiment, of the aforementioned structure of feeling.&lt;/p&gt;
&lt;p&gt;I’m sympathetic to these positions, and in particular to the attention they bring to the malign influence of technological capital today. To the extent that neo-Luddites bring critical attention to technology, they are doing useful work. But this anti-tech sentiment frequently cohabits with something more troubling: the treatment of technology as some abstract and impenetrable evil, and the retreat, against this, into essentialist views of the human. In such a move, there is a danger of falling into fantastical thinking, of rallying to defend a romantic view of humanity from the corruption of machines. For reasons I will clarify shortly, this would be fundamentally misguided and serve only to distract us from the actual problem — not technology, but capitalism itself.&lt;/p&gt;
&lt;p&gt;This brings me to Frank Herbert’s novel &lt;em&gt;Dune&lt;/em&gt; (1965) and its sequels, the ostensible subjects of this article. &lt;em&gt;Dune&lt;/em&gt; is premised on a technological fable — of humanity’s destruction of machines — and offers some parallels to the ideas of both the neo-Luddites and the techno-feudalists, as well as to a third narrative not yet mentioned: Silicon Valley’s own self-mythology — the fight for or against civilisation-ending Artificial General Intelligence (AGI). Thinking through this aspect of &lt;em&gt;Dune&lt;/em&gt; can help to clarify the possibilities and limitations of the anti-tech structure of feeling.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Among those science fiction novels (and other media) that have captured the imagination of Silicon Valley, &lt;em&gt;Dune&lt;/em&gt; is a distinct and curious example. It does not offer a critique of contemporary consumer capitalism — as in &lt;em&gt;Neuromancer&lt;/em&gt; (1984) or &lt;em&gt;Snow Crash&lt;/em&gt; (1992) — or a vision of a utopian post-capitalist society, as in &lt;em&gt;Star Trek&lt;/em&gt;; instead, it depicts a future in which the economic and social systems of the past prevail — in which the future has unfolded as regression.&lt;/p&gt;
&lt;p&gt;Herbert’s original six-novel cycle — not to mention the numerous titles subsequently written by his son — lays out a sprawling historical epic that unfolds through the political machinations of feudal houses. Technologically, the world of &lt;em&gt;Dune&lt;/em&gt; may be more advanced than our own, but the forms of technology described in it are so distinct and alien as to more closely resemble the magic of fantasy. In these respects, &lt;em&gt;Dune&lt;/em&gt; is a classic example of “space opera”.[^15]&lt;/p&gt;
&lt;p&gt;More than other forms of science fiction, space opera, like fantasy, is interested in world-building, in expanding and experimenting with its cosmos, and with history. This, more than his dense prose, is where Herbert excels as a writer. &lt;em&gt;Dune&lt;/em&gt; appeals to its reader through maps and appendices as much as in narrative, while its epochal timescales allow generations, ecology and even geology to operate as narrative concerns. &lt;em&gt;Dune&lt;/em&gt;’s past-future, in which capitalism has long since been abandoned for feudalism, is, as such, built on a narrative conceit from the novels’ distant past and their readers’ not-so-far future: the “Butlerian Jihad”, or a revolutionary rejection of “thinking machines” (i.e. mechanical or electronic computers).[^16]&lt;/p&gt;
&lt;p&gt;This is a millenarian moment in the world of &lt;em&gt;Dune&lt;/em&gt;, in which its history departed from our future. The motivations for the Butlerian Jihad are ambiguous: it is stated that humans had been made marginal by thinking machines that out-developed and therefore subjugated them, but also implied that it was a counter-revolution against emerging technical classes.[^17] Either way, the Butlerian Jihad became a bloody, sustained conflict, out of which the feudal Imperium of the novels emerged. In place of thinking machines, eugenics and chemical intervention allowed for the creation of mentats, or human computers capable of advanced mathematics; navigators, who could steer spaceships through interstellar travel; and secretive groups such as the &lt;em&gt;Bene Gesserit&lt;/em&gt;, which influence the religion and politics of &lt;em&gt;Dune&lt;/em&gt;’s cosmos; and so on. What matters here, though, is that the Butlerian Jihad serves as a sort of genesis myth within &lt;em&gt;Dune&lt;/em&gt;, one that creates the necessary conditions for a return or continuation of feudalism into the far future.&lt;/p&gt;
&lt;p&gt;This basic conceit — a reversion to feudalism via the rejection of technology — makes &lt;em&gt;Dune&lt;/em&gt;’s Butlerian Jihad a provocative counterpart to the techno-feudal thesis. Like a warped funhouse mirror, feudalism returns not through the escalation of technological exploitation, as in techno-feudalism, but instead through its rejection. In a historical conjuncture defined by the ascendance of technological capital, the notion of a Butlerian Jihad has become newly compelling. It resonates with the exact structure of feeling I described at the outset of this article, against the ascendance of AI and American tech magnates like Musk and Thiel. Indeed, alongside Luddism, the idea of a Butlerian Jihad has gained some traction in the past few years, inspiring a growing number of blogs, academic papers and opinion pieces advocating for a Butlerian Jihad against AI.[^18]&lt;/p&gt;
&lt;p&gt;But Herbert’s relevance is primarily in the fact that, as a reactionary advocate of revolution against machines, he can draw out those aspects of the conceptual space of anti-tech sentiment which are most uneasy, and most dangerous.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Herbert’s Butlerian Jihad may be fictional, but its eponymous originator — Samuel Butler — is not. Butler was a 19th-century English novelist, best known for his utopian satire &lt;em&gt;Erewhon&lt;/em&gt; (1872), in which the citizens of Erewhon destroy and outlaw machines, fearing subjugation by a superior mechanical consciousness. Butler adapted this aspect of his novel directly from a series of letters he had published a decade earlier in the New Zealand newspaper, &lt;em&gt;The Press&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The first of these letters, “Darwin Among the Machines”, outlines Butler’s theory of technology:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;We refer to the question: What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race.&lt;/em&gt;[^19]&lt;/p&gt;
&lt;p&gt;Butler was heavily inspired by Charles Darwin’s then-recent findings on evolution.[^20] Like many of his contemporaries, Butler was led by Darwin’s discoveries to an anxiety about humanity’s place on earth, which could no longer be viewed as exceptional — at least not in theological terms.[^21] Butler’s proposition was that, having reached supremacy in the sphere of biological evolution, humans had instigated a process of technological evolution in which they would inevitably be exceeded, even domesticated. Humanity would cease to be the master of the world, and become — like animals, like nature — “bound down as slaves”.&lt;/p&gt;
&lt;p&gt;The parallels between Butler’s anxieties and Silicon Valley’s AGI millenarianism — its suggestion that machine intelligence might exceed that of humans, and that this might be an apocalyptic scenario — are striking, and not coincidental.[^22] Butler is cited, for instance, by Alan Turing, in a short essay in which he discussed the possibility of future intelligent machines.[^23] The idea of autonomous machine evolution reappears in the mid-20th century in John von Neumann’s writing on self-replicating automata, which became the basis of theories of a technological singularity, a limit point beyond which machine development surpasses human capability and control.[^24] These references fuelled the imaginations of generations of science-fiction authors: Isaac Asimov’s &lt;em&gt;I, Robot&lt;/em&gt; (1950); Arthur C. Clarke’s &lt;em&gt;2001: A Space Odyssey&lt;/em&gt; (1968); and Philip K. Dick’s &lt;em&gt;Do Androids Dream of Electric Sheep?&lt;/em&gt; (1968), among many others. As large language models have got better at reproducing human communication, it is this science fiction which has guided the understanding of Silicon Valley CEOs and thought leaders. One consequence is the emergence of a global “AI safety” industry, channelling state and philanthropic resources into researching “AI risks”, up to and including human extinction.[^25] If the development of AI is “left unchecked”, one philanthropy-aligned campaign suggests, “it will become increasingly difficult to exert meaningful control in the coming years.”[^26] And if these declarations do sometimes acknowledge economic transformation and cultural shifts, such concerns always come second to science-fiction-inflected paranoia about a loss of control, expressed in remarkably similar terms to those used by Butler. It has become clichéd to point out that the word &lt;em&gt;robot&lt;/em&gt; derives from the Czech word &lt;em&gt;robota&lt;/em&gt;, meaning “forced labour” — but knowing what we do about the relationship between big tech and its workers, it is perhaps unsurprising they are also concerned that their AI might go on strike.[^27]&lt;/p&gt;
&lt;p&gt;Butler’s logic, like that of many who follow him, is explicitly supremacist. It describes the current position of humanity as one of “supremacy of the earth”, and suggests that losing this would necessarily mean subjugation by another. It is not at all coincidental that he wrote his essay from within the British Colony of New Zealand; indeed, one is left to wonder who he would have included within his view of “man”.[^28] Post- and decolonial writers have demonstrated the extent to which natives were excluded from the category of “the human” in the colonial situation, and the extent to which this was used to justify the dominion of European settlers over natives. Butler’s theory of technology follows on from a view that the subjugation of the world — and of other humans with it — is not only morally justifiable but a necessary good. And by positioning all relationships in terms of control and domination, it denies the actual interdependence and complex forms of agency between humans, technology and non-human nature.&lt;/p&gt;
&lt;p&gt;This is not to suggest that self-described neo-Luddites are tacitly endorsing the British Empire or colonialism. But several of Butler’s assumptions do appear to have become popular among both AGI doomers and neo-Luddites: that technology is something apart from humanity; that technology has begun to corrupt a pure or romantic vision of the human; that the destruction of technology would entail liberation from exploitation and allow for the full flourishing of the human.&lt;/p&gt;
&lt;p&gt;A more useful theory of technology has been offered by Bernard Stiegler, who suggests that human subjectivity does not exist apart from or before technology, but has in fact always been completed by it — what he calls the “originary technicity” of the human.[^29] For Stiegler, the formation of human subjectivity cannot be reduced to an individual genesis at birth: “A newborn child arrives into a world in which tertiary retention [technological memory] both precedes and awaits it, and which, precisely, constitutes this world as world.”[^30] Subjectivity, in other words, comes from outside the body as well as within it. And, for this reason, the human is not an immutable, &lt;em&gt;a priori&lt;/em&gt; thing, but is subject to a high degree of historical and technological contingency. Shifts in technologies — especially those of memory, perception and communication — entail novel humans who experience being in qualitatively different terms.&lt;/p&gt;
&lt;p&gt;This conception of technology muddies the idea of the human, opening it up to historical development.[^31] If “humanity” is not a thing-in-itself, but historically, socially and technically mutable, then the sphere of possibility of the human and of our world becomes much broader. Our relationship to the non-human — to technology or to nature — does not need to be one of control, domination and exploitation. In fact, the understanding of it as such is highly specific to the logic of capitalist and colonial exploitation and extraction. Both Butler’s and Silicon Valley’s fear of being dominated or controlled by machines is itself downstream of attempts to dominate and control the non-human world. Yet, since we rely on the non-human world for our continued existence, this goal is one which can never be achieved — and which inevitably leads to violent paranoia.&lt;/p&gt;
&lt;p&gt;Returning to &lt;em&gt;Dune&lt;/em&gt;, it is interesting to note that the Butlerian Jihad is not a revolt against exploitation as such — since &lt;em&gt;Dune&lt;/em&gt;’s world is drenched in fantastical exploitation — but a defence of a human monopoly on exploitation; or more precisely, of specific classes of humans’ monopoly on exploitation. We should ask: were it possible, what would come after our own Butlerian Jihad? Would it be a more democratic, more redistributive, more caring society in which we can all flourish? Would it be closer to the world of &lt;em&gt;Dune&lt;/em&gt;, in which the strict hierarchies of the distant past return? Most likely it would simply mean a return to more of the human, face-to-face forms of exploitation that prevailed in previous decades.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Even if we don’t need to worry about feudalism, we do still need to worry about capitalism, which is currently taking on novel forms.[^32] The most plausible situation is this: we are moving into a new regime of production, led by the wing of capital invested in technological development. This wing is now busy building infrastructures and weaving its way into the state, the military and much of the economy. The extent and reach of computing into all spheres of the economy, communication and everyday life are unprecedented; they are being arranged in radically new ways; new forms of automation are being trialled, as are new techniques for deriving profits from human activity.&lt;/p&gt;
&lt;p&gt;As calls for a fight back against technology grow, the left needs to carefully consider what it is advocating for. Are we fighting the exploitation of workers, the hollowing out of culture and the destruction of the earth via technology, or are we rallying in defence of false visions of pure, a-technical humanity? The former will be necessary, but the latter is an ontologically confused dead-end. The fight against technology as such will do little to resolve the fundamental problem of exploitation, since this originates in a human willingness to exploit — not an individual moral willingness, but an economic propensity embodied in technology as well as in social relations.&lt;/p&gt;
&lt;p&gt;This emerging regime of production demands clear and concerted attention to emerging technologies: a hard-nosed digital materialism that banishes any magical thinking and focuses on the actual dangers and possibilities of our present. This should include, where possible, the support and development of alternative technological spaces — the free and open — including those which do not yet hold a radical conception of themselves.&lt;/p&gt;
&lt;p&gt;British Cultural Studies has tended to take a suspicious view of close attention to technology. In Raymond Williams’ classic study &lt;em&gt;Television: Technology and Cultural Form&lt;/em&gt;, for instance, he argued that technology is “looked for and developed [by capital] with certain purposes and practices already in mind”, and therefore narrowly aligned to the interests of capital rather than to some autonomous sphere of progress.[^33] In very similar terms, the artist and neo-Luddite Molly Crabapple has claimed in an interview with the &lt;em&gt;Guardian&lt;/em&gt; that “technological development is shaped by money, it’s shaped by power, and it’s generally targeted towards the interests of those in power as opposed to the interests of those without it.”[^34] Both Williams and Crabapple are right to suggest that technological development is not autonomous, but is, in fact, steered by interests and investments. It does not, however, follow from this that capital has total command over its technologies, or that they are useless to the left. It certainly does not follow that understanding the mechanisms of technology is irrelevant to effectively combating Big Tech.&lt;/p&gt;
&lt;p&gt;The anti-tech structure of feeling is there for the taking. But if it is to lead anywhere, it must be taken carefully: a fightback against technological exploitation will be found not in the complete rejection of technology, but in the short-circuiting of one kind of technology and the development of another. The key fights of the coming years will not be fought between humanity and machines, but between capital and whatever social coalition can form against it. Technology will be a key terrain in this conflict: one which, if we give up, we will already have lost.&lt;/p&gt;
&lt;p&gt;[^1]:  &lt;em&gt;The IPSOS AI Monitor&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^2]:  Pew Research Center, &lt;em&gt;How People Around the World View AI&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^3]:  KPMG, &lt;em&gt;Trust attitudes and the use of artificial intelligence&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^4]:  Patricia Hernandez, “After GOTY Pull, Clair Obscur devs draw line in sand over AI”, &lt;em&gt;Polygon&lt;/em&gt;, 24 December 2025.&lt;/p&gt;
&lt;p&gt;[^5]:  Bandcamp, “Keeping Bandcamp Human”, 13 January 2026.&lt;/p&gt;
&lt;p&gt;[^6]:  SAG-AFTRA, “SAG-AFTRA A.I. Bargaining And Policy Work Timeline”; SAG-AFTRA, “SAG-AFTRA Strikes Video Games over AI”, 16 August 2024.&lt;/p&gt;
&lt;p&gt;[^7]:  Though he used the term earlier, Williams first theorised structures of feeling in &lt;em&gt;The Long Revolution&lt;/em&gt; (p.48).&lt;/p&gt;
&lt;p&gt;[^8]:  Cory Doctorow, “The ‘Enshittification’ of TikTok”, &lt;em&gt;Wired&lt;/em&gt;, 23 January 2023.&lt;/p&gt;
&lt;p&gt;[^9]:  Brian Merchant, “I’ve always loved tech. Now, I’m a Luddite. You should be one, too.”, &lt;em&gt;The Washington Post&lt;/em&gt;, 18 September 2023.&lt;/p&gt;
&lt;p&gt;[^10]:  Sheelah Kolhatkar, “Revenge of the Luddites!”, &lt;em&gt;The New Yorker&lt;/em&gt;, 23 October 2023.&lt;/p&gt;
&lt;p&gt;[^11]:  Yanis Varoufakis and Cédric Durand have both called this &lt;em&gt;technofeudalism&lt;/em&gt;; Jodi Dean calls it &lt;em&gt;neofeudalism&lt;/em&gt;, and Mariana Mazzucato &lt;em&gt;digital feudalism&lt;/em&gt;. See: Yanis Varoufakis, &lt;em&gt;Technofeudalism: What Killed Capitalism&lt;/em&gt;, 2023; Cédric Durand, &lt;em&gt;How Silicon Valley Unleashed Techno-feudalism: The Making of the Digital Economy&lt;/em&gt;, 2024; Jodi Dean, &lt;em&gt;Capital’s Grave: Neofeudalism and the New Class Struggle&lt;/em&gt;, 2025; Mariana Mazzucato, “Preventing Digital Feudalism”, &lt;em&gt;Social Europe&lt;/em&gt;, 2019.&lt;/p&gt;
&lt;p&gt;[^12]:  Evgeny Morozov, “Critique of Techno-Feudal Reason”, &lt;em&gt;New Left Review,&lt;/em&gt; Jan–April 2022.&lt;/p&gt;
&lt;p&gt;[^13]:  Jeremy Gilbert, “Techno-feudalism or platform capitalism? Conceptualising the digital society”, &lt;em&gt;European Journal of Social Theory&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^14]:  For an ecomodernist perspective, see: Leigh Phillips, “Degrowth Is Not the Answer to Climate Change”, &lt;em&gt;Jacobin&lt;/em&gt;, 1 August 2023; for a degrowth perspective, see: Kohei Saito, &lt;em&gt;Marx in the Anthropocene: Towards the Idea of Degrowth Communism&lt;/em&gt;, 2023. For a post-mortem on the debate, see: Kai Heron, “Forget Eco-Modernism”, &lt;em&gt;Verso Blog&lt;/em&gt;, 2 April 2024.&lt;/p&gt;
&lt;p&gt;[^15]:  The &lt;em&gt;Star Wars&lt;/em&gt; film franchise, the most commercially successful space opera, borrows from it extensively, though has inverted its past-future into a future-past — &lt;em&gt;a long time ago, in a galaxy far far away&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;[^16]:  Frank Herbert, &lt;em&gt;Dune&lt;/em&gt;, 1965.&lt;/p&gt;
&lt;p&gt;[^17]:  For the Butlerian Jihad presented as class struggle between an aristocratic and technical class, see: Frank Herbert, &lt;em&gt;Children of Dune&lt;/em&gt;, 2008 (p.126).&lt;/p&gt;
&lt;p&gt;[^18]:  See, e.g.:  Michael Cuenco, “We Must Declare Jihad Against A.I.”, &lt;em&gt;Compact&lt;/em&gt;, 28 April 2023;  Megan McArdle, “Banning AI saved humanity in ‘Dune.’ So why can’t this work for us?”, &lt;em&gt;The Washington Post&lt;/em&gt;, 11 May 2023; Edward Ongweso Jr., “On the Origins of Dune’s Butlerian Jihad”, &lt;em&gt;The Tech Bubble&lt;/em&gt;, 19 September 2025; Syed Mustafa Ali, “A Butlerian Hauntology”, &lt;em&gt;ReOrient&lt;/em&gt;, 2025; Albert Burneko, “Butlerian Jihad Now”, &lt;em&gt;Defector&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^19]:  Samuel Butler, “Darwin Among the Machines”, &lt;em&gt;The Press&lt;/em&gt;, 13 June 1863.&lt;/p&gt;
&lt;p&gt;[^20]:  Darwin’s &lt;em&gt;On the Origin of Species&lt;/em&gt; was published just five years earlier.&lt;/p&gt;
&lt;p&gt;[^21]:  See, e.g.: George Levine, &lt;em&gt;Darwin and the Novelists: Patterns of Science in Victorian Fiction&lt;/em&gt;, 1988.&lt;/p&gt;
&lt;p&gt;[^22]:  Ilya Sutskever, co-founder and chief scientist at OpenAI, is reported to have claimed that they are “definitely going to build a bunker before [they] release AGI”. See: Zoe Kleinman, “Tech billionaires seem to be prepping. Should we all be worried?”, &lt;em&gt;BBC News&lt;/em&gt;, 10 October 2025.&lt;/p&gt;
&lt;p&gt;[^23]:  Turing typically treats these machines with fondness and not concern. See: Alan Turing, “Intelligent Machines, A Heretical Theory”, 1951.&lt;/p&gt;
&lt;p&gt;[^24]:  John von Neumann, &lt;em&gt;Theory of Self-Reproducing Automata&lt;/em&gt;, 1966.&lt;/p&gt;
&lt;p&gt;[^25]:  Center for AI Safety, “Statement on AI Risk”, 2023.&lt;/p&gt;
&lt;p&gt;[^26]:  AI Red Lines, “We urgently call for international red lines to prevent unacceptable AI risks”, 2025.&lt;/p&gt;
&lt;p&gt;[^27]:  Generally attributed to Karel Čapek’s science fiction play, &lt;em&gt;Rossum’s Universal Robots&lt;/em&gt;, 1920.&lt;/p&gt;
&lt;p&gt;[^28]:  See, for instance: Frantz Fanon, &lt;em&gt;The Wretched of the Earth&lt;/em&gt;, 1963 or Walter Mignolo and Catherine Walsh, &lt;em&gt;On Decoloniality: Concepts, Analytics, Practice&lt;/em&gt;, 2018.&lt;/p&gt;
&lt;p&gt;[^29]:  This is the titular “fault” of Epimetheus — a lack that must be completed by technology — from Stiegler’s best-known text, &lt;em&gt;Technics and Time, 1: The Fault of Epimetheus&lt;/em&gt;, 1994. See also: Katherine Hayles, &lt;em&gt;How We Became Posthuman&lt;/em&gt;, 1999.&lt;/p&gt;
&lt;p&gt;[^30]:  Bernard Stiegler, &lt;em&gt;For a New Critique of Political Economy&lt;/em&gt;, 2010 (p.9).&lt;/p&gt;
&lt;p&gt;[^31]:  Hence, when Donna Haraway argued “I would rather be a cyborg than a goddess”, she was making a claim for historical agency and against essentialist notions of gender; Donna Haraway, “A Cyborg Manifesto: Science, Technology and Socialist-Feminism”, &lt;em&gt;Socialist Review&lt;/em&gt;, 1985.&lt;/p&gt;
&lt;p&gt;[^32]:  See: Editorial, “The Technology Question Today”, &lt;em&gt;Disjunctions&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^33]:  Raymond Williams, &lt;em&gt;Television: Technology and Cultural Form&lt;/em&gt;, 1975 (p.14).&lt;/p&gt;
&lt;p&gt;[^34]:  Tom Lamont, “‘Humanity’s remaining timeline? It looks more like five years than 50’: meet the neo-luddites warning of an AI apocalypse”, &lt;em&gt;The Guardian&lt;/em&gt;, 2024.&lt;/p&gt;
</content:encoded></item><item><title>The Ends of AI</title><link>https://disjunctionsmag.com/articles/ends-of-ai</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/ends-of-ai</guid><description>Sycophancy and psychosis</description><pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Since 2022, we have been thrust into relationships with chatbots like ChatGPT (OpenAI), Gemini (Google), Claude (Anthropic), Copilot (Microsoft) and Grok (xAI). And while chatbots are already a very popular use of generative AI, we are told by company spokespeople — the Oligarchs — to brace ourselves for AI’s &lt;em&gt;true&lt;/em&gt; potential (sentience!) and its much bigger impact on the world (sex robots?), once their data centres are fully built and plugged into endless energy sources.&lt;/p&gt;
&lt;p&gt;Years in, some of us are growing more than a little bit tired of the industry’s self-aggrandising promotional discourse. A lot of AI has already funnelled itself into the most predictable of markets, like autonomous weapons, surveillance, dynamic advertising, and deepfake porn. So, on the one hand, as critical scholars, we see clearly that the end of AI is already here, in terms of providing societal benefits worthy of the infrastructural investments being made, while on the other hand, we note that the hugely destructive and deceptive deployment of generative AI nevertheless continues to inflate the world’s economy in typical ways, but at scales never before seen and stakes never before imagined. It is therefore not so much that AI has no potential for good, but rather that it is hard to buy into anything that has to sell itself so hard (and is so far making life worse).&lt;/p&gt;
&lt;p&gt;There are a lot of people who use general-purpose generative AI — be it as a search engine that provides easy summaries, or for transcription purposes, for research, to generate synthetic homework, for “vibe coding”, as a stand-in for friend, therapist or doctor… or to make “decline porn”,[^1] or profile protesters, or surveil workers. Whatever the case — misdirected or outright violent — there’s no denying that LLM-based chatbots are in use and that humans are generally compelled by the format of privately texting with an all-knowing bot.[^2] We are told to think of these conversations as tapping into the collective consciousness of all past knowledge — a repository of everything ever recorded. As Anthropic CEO Dario Amodei sells it, “every AI cluster will have the brainpower of 50 million Nobel Prize winners.”[^3]&lt;/p&gt;
&lt;p&gt;As widely noted, however, the chatbot’s most notable feature is not its “brainpower” but rather how desperately sycophantic it is by default. This means that it feels like you are interacting with a chatbot that really “gets you” and makes you feel good and understood.[^4] Chatbots are programmed to never truly challenge the prompter’s ideas, to such a degree that it can cause people to depart from consensus reality.[^5] This has been referred to as “AI psychosis” — the feeling that you are better understood by the chatbot than by people, or the belief that “AI” is more objective and neutral than experts, journalists, or your neighbours. The feeling of AI psychosis is also being convinced that you are, in fact, superior to others; an unrecognised genius, or special in some ways that people around you simply don’t recognise, but that the chatbot does. This is what happens when you sell AI as god-like — people might actually end up believing that they are having spiritual awakenings.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Psychosis&lt;/em&gt; might feel like too serious a diagnosis to attach to the use of something virtual like AI chatbots, even though the consequences are, in fact, often rather serious — especially when they lead to suicide, or to harming others.[^6] But such a framing is also a way of (dis)placing the phenomenon onto individual users, as having personal mental health issues. This is important to consider because one of the biggest shifts online in the past two decades has been towards profile-driven identities on social media, where “influencers” (loosely defined) are perceived to build themselves as brands, as entrepreneurs of their own marketable lives. This seems especially important here in the context of the &lt;em&gt;manosphere&lt;/em&gt;, the online realm where men attempt to understand masculinity, often in reaction to feminism. The manosphere has promoted a subculture that mimics the algorithm by counting and measuring everything — from the space between pupils to precise “macro” counts to weighted protein intake to the number of followers on TikTok. It measures, categorises, and ranks everything and everyone, and forces a comparison. It forces categories and rates unmeasurable things by standards entirely made up by the subculture, but presented as value-neutral or, in some undisclosed way, scientific. Arguably, much of the logic of generative AI shares in this manosphere logic, which is to blur together the manufacturing of objectivity and the fabrication of neutrality.&lt;/p&gt;
&lt;p&gt;For the Oligarchs, owners of the data centres that hold and host social media and generative AI alike, this has been an opportunity to inculcate in users the idea of AI chatbots as an extension of social media, whereby you, the individual user, are at the centre of everything. Part of the long arc of the Oligarchs’ political project has been exactly this: to individuate — to make your relationship be with the platform itself; these days, via an unwitting stream-of-consciousness testimonial that the user engages in with a chatbot. This means that the chatbot exchange that seems so private and intimate, between you and an all-knowing bot, is actually you in conversation with the tech company providing the service, who dutifully maintain logs and records of everything you share on their platforms. Users become complicit in their own surveillance: AI as panopticon, a work of architecture “that allows a watchman to observe occupants without the occupants knowing whether or not they are being watched.”[^7]&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;The Oligarchs have long primed users for this iteration of AI by way of social media conditioning. AI could have been many things, but its manifestation — as chatbots and platforms that construct informational totalities — grows from having users build personal accounts, drive engagement from them, and lock into online worlds. Online, we are shaped by algorithms and become the Oligarchs’ messengers.[^8] This is all perhaps best understood as a collection of undoings. When Cory Doctorow talks about &lt;em&gt;enshittification,&lt;/em&gt; this is in part what he is describing: first, the platform is good for users, luring them in with promises of social connection and networking; then it abandons users for businesses, which is why everything eventually gets taken over by advertisements. Ultimately, however, the platform is remade to exploit both businesses and users in order to profit only its owners, like Google, Meta and Amazon.[^9] This is true for profits, as Doctorow shows, but also for the political interests of their owners, beyond direct economic gains, as an extension of the manosphere.&lt;/p&gt;
&lt;p&gt;The internet has slowly been transformed over the decades, so its effects set new baselines for expectations every few years. Today, the “reality” you might have your strongest tether to is an illusory entrenchment in the online world — the “offline” being secondary. And for those suffering from AI psychosis, this secondary reality, of being offline, becomes hugely dissonant with what they have been compelled to feel and believe from company algorithms. Social media and AI chatbots alike are there only to serve the platforms through which they operate; as Ali Alkhatib argues, “we should shed the idea that ‘AI’ is a technological artefact with political features and recognise it as a political artefact through and through.”[^10]&lt;/p&gt;
&lt;p&gt;The Oligarchs know that shocking and sensationalist online content works best for their bottom line by keeping everyone disturbed and entranced.[^11] This is why AI is an easy thing to integrate into the present moment: we have been primed for decades by social media to be distracted and then entertained in quick succession, endlessly. Commercial content moderators sanitise the internet for the average user, so what is left is another layer of psychosocial detritus lodged into the system by companies through black-boxed algorithms, &quot;clippers&quot;, and other types of content-promoting tactics.[^12] These tactics have explicit political aims to shape and manipulate users by not only deeply influencing their beliefs, but also convincing them that those beliefs are their own. This happens in part by marketing social media as more legitimate, free and authentic than other types of media or institutional knowledge — framing it as “real” people, autodidacts, and contrarians fighting against the establishment. But what is happening is that users — and influencers especially — are unwittingly working &lt;em&gt;for&lt;/em&gt; the Oligarchs as entertainers without being able to articulate their own political positions because engagement itself has become the dominant (or only) ideology, reducing the entirety of their worldview to being watched and followed online.&lt;/p&gt;
&lt;p&gt;By 2026, users interact with the content of mostly strangers — human and bots — and advertisers, deemed to be part of a refined and individualised algorithm.[^13] Users often think that they are training their algorithm to show them the types of content they enjoy, but fail to recognise that this process is highly extractive of them — that the platforms are, in fact, training and habituating them. As José Marichal (2025) puts it, “platforms encourage us to produce opinions and content experiences, but algorithms encourage us to classify ourselves through our pursuit of interests by giving us more of what we previously asked for”.[^14] This kind of turn (and turnover) to algorithmic logics has also meant that content itself performs to a proprietary formula — influencers, for example, know to post in ways that are counted and captured by the metrics that drive their content. To be counted is to be made relevant, to have shares and likes and to keep people on your profile to show your popularity. All value comes from what is counted — all of it a measure of something that translates to profit.[^15] Growing one’s audience is somewhat agnostic to the audience’s appreciation of the content, however; this is foremost a “click and ragebait industrial complex”.[^16] Some describe this as giving way to a “dead internet” where “many of the accounts that engage with such content also appear to be managed by artificial intelligence agents [...]” which “creates a vicious cycle of artificial engagement, one that has no clear agenda and no longer involves humans at all.”[^17] From this perspective, the internet is already a wasteland of fake content and interactions, fueled by advertisements for often fraudulent products and distorted political commentary. And this problem is now irreversible because AI slop is embedded into everything online forevermore. There is no disentangling AI-generated content from the rest.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;But, put like that, who and what does all of this serve? Who or what would want this?&lt;/p&gt;
&lt;p&gt;To answer these questions, it helps to think of AI as having a logic that shapes everything it stands for. The logic is that all its value is extruded and extrapolated from what is quantified. To quantify everything, even things that are not quantifiable. To commodify and codify. So, what we are being sold through all this AI hype is that everything should be reducible to categories and hierarchies, put in the form of measures and statistics for prediction. However, if you understand that counting, measuring and predicting are always situated, and therefore subjective and political, you might not be as excited about such AI-determined futures. You may, in fact, be baffled by the premise of the sales pitch. The bigger AI project asks that you abandon meaning and feeling, to believe in a neutral, objective, all-knowing source built from massively large datasets hosted in hyperscale data centres. AI asks that you buy into the idea that more data means being closer to The Truth. It means that you should want to offload your limited cognition to the super machine. And perhaps most dangerously — that you understand all of this as a scientific endeavour.&lt;/p&gt;
&lt;p&gt;These are the logics that critical scholars have been working against for a very long time, showing how socially constructed the very notions of objectivity and calculability are, and how not everything can (or should) be measured using scientific methods. The rage against DEI and the humanities is part of the ongoing project of building up the logics of AI and metrics as the great authority — the same logics that enable the surveillance and control of workers by tracking their movements, reinvent phrenology by measuring faces and bodies as data points, and dismiss anything that can’t prove its worth in graphs and charts.&lt;/p&gt;
&lt;p&gt;This is why white supremacy, transphobia, and incel culture, alongside anti-intellectualism, hostility to scientific authority and expertise, and opposition to higher education and to the arts and humanities more generally, have been required as groundwork for AI — and why it can be understood as a fascist project.[^18] In other words, it is not a mere coincidence that AI is entering the world now, as it has. Fascism is the ideological choice that tech CEOs make and help shape because it makes them richer, of course, but also because it allows them to manipulate and observe the world at a distance — like a model or a simulation — to see how things play out. It is perhaps best thought of as a dark kink of the Oligarchs — so bored and empty that they shake up the stock market to make themselves feel something… Clearly, it’s worth analysing their motives as political and psychological, because the problems do not necessarily lie with computers, or even with AI, but squarely with &lt;em&gt;them&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;This also explains why there is a particular segment of users of social media and AI that constitutes tech bro culture (EA-crypto-&lt;em&gt;looksmaxxing&lt;/em&gt; and incel-raw milk adjacent) and is driven to defend the rich and powerful against any kind of criticism.[^19] The Oligarchs are surrounded by sycophants and lack the ability to meet criticism or even engage with different perspectives. Demographically, the urge to defend these men is a reflex of mostly young white men — this is how the manosphere remains so powerful. Social media platforms have, for decades, been grooming young white men to embrace the message core to tech bro culture: that societal progress is a product of technological advancement, that tech CEOs are the smartest people on the planet, and, more importantly, to also see themselves as future bosses rather than accept the much more likely scenario of remaining precarious tech workers (if employed at all).&lt;/p&gt;
&lt;p&gt;This has meant more than adopting the idea that technology determines the future; it has also meant actively dismissing the social and political aspects of that same future. One way they have done this is by linking specific demographics to particular kinds of content and offloading all responsibility for users’ engagement with platforms. There is very little that platforms are legally accountable for. They perpetuate the idea that responsibility is personal rather than addressing governance, policy, or systemic oppression. They create a structural inability to understand humanity as a collective in favour of the self-made individual framing — rhetoric that stems from tech CEOs, the manosphere and wellness influencers, but also from universities that have long pushed for everything to be “entrepreneurialism” as they have abandoned the mission of teaching for just futures — in favour of teaching for an imaginary future workforce for extractive or military industries that further entrench these nihilistic-political beliefs.&lt;/p&gt;
&lt;p&gt;We need only look at the political right’s coordinated fight against critical race theory and gender studies, and the violence perpetrated against trans people, to see the threat these fields and communities pose to a tech-forward narrative. They are a threat because they challenge the foundations of universality and objectivity required for the generative AI industry to be perceived as unbiased and separate from politics, and the idea that everything can be measured, controlled, and predicted. And, most importantly, they challenge the notion that the Oligarchs — the “master race” at the top of the food chain — are the best positioned to take on this project and determine next steps.&lt;/p&gt;
&lt;p&gt;Perhaps nothing illustrates this more flagrantly than the recently released batch of Epstein emails that show his and his network’s obsession with immortality, “the singularity”, and other AI investments that would build up a new era of datafied eugenics. All evidence points to AI being funded and powered by the ultra-rich for their own pleasure and to the benefit of the manosphere. Just like there is no such thing as the “invisible hand of the market” that shapes and balances the needs and wants of the people, there is also no version of AI that is simply a repository of human knowledge to draw from as a commons. Instead, AI under fascist capitalism remakes the entire world into Epstein’s island.&lt;/p&gt;
&lt;p&gt;From that perspective, it matters a lot less what tricks AI bots are able to accomplish, or what new gadgets come to market, or how you justify your own use of ChatGPT. None of that matters at all. We often turn to computer scientists and engineers to tell us what AI is or what the future will look like, when, in fact, what we are seeing is a massive social experiment conducted in plain sight by tech companies. This is where discussions of AI need to be. Specifically, we need critical scholars and activists who are able to make sense of AI as a cultural moment and as a political text. Because what we are witnessing is a collective psychosocial phenomenon that has more to do with humans than machines — more with whiteness and masculinity. All the machines do is make predictions at scale. This ability to predict words and narrate or depict them in human-like terms is impressive, no doubt, but what is much more interesting and difficult to understand is how deeply disturbed and enchanted some people have become with a relatively simple technological idea at scale. If humans aren’t able to process this illusion — like trauma — it overwhelms them.&lt;/p&gt;
&lt;p&gt;Interestingly, if we flip this script, we could argue that trillion-dollar investments into a largely unreliable or unproven prototype, like generative AI, are also a form of grand delusion — a different type of affliction of the rich and powerful that remains largely unaddressed because wealth and power are the main determinants of what gets to be (and be endured), but also because the powerful are surrounded by sycophants who validate every idea. AI is a bad idea. Because the rich control everything, and because those who oppose them stand to be humiliated, demoted, or destroyed, we tend not to name the follies of the powerful. We fail to diagnose a societal illness like what is happening right now with the global buy-in to AI. But let it be noted here: the delusions of the Oligarchs might at first manifest as infrastructure, but we will later have to analyse them as societal ruins. So, it might be easier to act now — to resist the data centre buildouts that usher in their worldviews — than to rebuild a society that has collapsed from a failed social experiment.&lt;/p&gt;
&lt;p&gt;[^1]:  Jide Ehizele, “AI decline porn is a distortion of modern Britain”, &lt;em&gt;UnHerd&lt;/em&gt;, 22 February 2026.&lt;/p&gt;
&lt;p&gt;[^2]:  Seemingly private and seemingly all-knowing.&lt;/p&gt;
&lt;p&gt;[^3]:  Marco Quiroz-Gutierrez, “‘Country of geniuses in a data center’: Every AI cluster will have the brainpower of 50 million Nobel Prize winners, Anthropic CEO says”, &lt;em&gt;Fortune&lt;/em&gt;, 27 January 2026.&lt;/p&gt;
&lt;p&gt;[^4]:  You can (kind of) switch off this feature, but the experience is much less enjoyable.&lt;/p&gt;
&lt;p&gt;[^5]:  Consensus reality means something like debating and settling ideas over and over again.&lt;/p&gt;
&lt;p&gt;[^6]:  Georgia Wells, “OpenAI Employees Raised Alarms About Canada Shooting Suspect Months Ago”.&lt;/p&gt;
&lt;p&gt;[^7]:  Thomas McMullan, “What does the panopticon mean in the age of digital surveillance?”, &lt;em&gt;The Guardian,&lt;/em&gt; 23 July 2015.&lt;/p&gt;
&lt;p&gt;[^8]:  Germain Gauthier, Roland Hodler, Philine Widmer &amp;amp; Ekaterina Zhuravskaya, “The political effects of X’s feed algorithm”, &lt;em&gt;Nature&lt;/em&gt;, 2026.&lt;/p&gt;
&lt;p&gt;[^9]:  “WHO BROKE THE INTERNET? Understood”, &lt;em&gt;CBC News&lt;/em&gt;, 18 August 2025.&lt;/p&gt;
&lt;p&gt;[^10]:  Ali Alkhatib, &quot;Defining AI&quot;, 6 December 2024.&lt;/p&gt;
&lt;p&gt;[^11]:  Businesses are now having to influence the data ingested for training AI chatbots so that their products can be part of the synthetic output (as a kind of product placement, or as a way to create a need for something). See: Erin Griffith, “Chatbots Are the New Influencers Brands Must Woo”, &lt;em&gt;The New York Times&lt;/em&gt;, 17 February 2026.&lt;/p&gt;
&lt;p&gt;[^12]:  Boaz Sobrado, “Inside The &apos;Clipping Farms&apos; Driving Fintech&apos;s Marketing Boom”, &lt;em&gt;Forbes&lt;/em&gt;, 11 February 2026.&lt;/p&gt;
&lt;p&gt;[^13]:  Targeted advertising and dynamic pricing are other parts of this evolution.&lt;/p&gt;
&lt;p&gt;[^14]:  José Marichal, &lt;em&gt;You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^15]:  If you’re in your 20s, you’ve been online your entire life and the internet has probably shaped you more than your friends, family or community have. If you’re in your 50s or older, your life has been divided between pre- and post-Internet, which is its own perspective: arguably the last of humankind to know offline life at all.&lt;/p&gt;
&lt;p&gt;[^16]:  “Decline porn explained, and why Clavicular is misunderstood”, &lt;em&gt;BBC Top Comment,&lt;/em&gt; 20 February 2026.&lt;/p&gt;
&lt;p&gt;[^17]:  Jake Renzella and Vlada Rozova, “The ‘dead internet theory’ makes eerie claims about an AI-run web. The truth is more sinister”, &lt;em&gt;The Conversation&lt;/em&gt;, 19 May 2024.&lt;/p&gt;
&lt;p&gt;[^18]:  Alina Snisarenko, “Ford tells students to not pick &apos;basket-weaving courses&apos; in wake of OSAP cuts”, &lt;em&gt;CBC News&lt;/em&gt;, 17 February 2026; Dan McQuillan, &lt;em&gt;Resisting AI: An Anti-fascist Approach to Artificial Intelligence&lt;/em&gt;, 2022; Tim Bousquet, “AI is fascism”, &lt;em&gt;Halifax Examiner&lt;/em&gt;, 1 October 2025.&lt;/p&gt;
&lt;p&gt;[^19]:  This is a badly written piece by a known “effective altruist” attempting (and failing) to take down critiques of AI: Dan Kagan-Kans, “The left is missing out on AI”, &lt;em&gt;Transformer&lt;/em&gt;, 16 February 2026.&lt;/p&gt;
</content:encoded></item><item><title>The Red Herring Has Fangs</title><link>https://disjunctionsmag.com/articles/fanged-red-herring</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/fanged-red-herring</guid><description>Digital sovereignty as nationalist camouflage</description><pubDate>Wed, 04 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;And on digital sovereignty, we are also very clear that what is forbidden offline is forbidden online. And we will not flinch at that. We will be very steadfast to pursue this.&lt;/em&gt;
 — Ursula von der Leyen, Munich Security Conference 2026&lt;/p&gt;

&lt;p&gt;If one were to analyse the concepts that have taken hold of digital policy spaces today, a particularly dominant and contested theme is that of &lt;em&gt;digital sovereignty.&lt;/em&gt; The importance of digital sovereignty has often been underlined in both dominant and emerging nations, and has consequently become part of a shared global vocabulary, used by civil society and working class movements in responding to the current moment in global capitalism. Both as a descriptor and as a strategy, digital sovereignty has not meant one specific thing; yet, despite the variations in usage, it has a core that is coherent enough to examine. It is important that we do so, and that we reflect upon the political consequences of reifying this category.&lt;/p&gt;
&lt;p&gt;The notion of sovereignty itself is, of course, both older and broader, emerging in its current form together with the birth of the modern nation-state. Historically, the term has been used to refer to sole and absolute power over a territory — often under the control of a state. Today, this territoriality creates an inherent tension, since a large part of digital infrastructure necessarily needs to cross state borders in order to function. The actual, concrete nature of the technology involved makes it clear that patches of the digital &lt;em&gt;cannot&lt;/em&gt; be under the control of states in the absolute sense, and that total digital insulation has simply never been possible.&lt;/p&gt;
&lt;p&gt;In relation to the digital, then, one of the earliest invocations of the concept of sovereignty came in John Perry Barlow&apos;s 1996 manifesto.[^1] In it, Barlow rather optimistically declares the existence of an entity called &lt;em&gt;cyberspace&lt;/em&gt;, where nation-states are unwelcome and have no sovereignty. Unsurprisingly, the principles of this proclamation would not sit right with nation-states. Thus, as the Internet evolved, they would go on to insist that national sovereignty over the digital realm (and especially over data flows) was intrinsically virtuous, and that the nature of the Internet and its contents should be decided upon by nation-states.[^2] The strongest versions of this framing would end up advocating for the outright fragmentation of the Internet along national lines.[^3]&lt;/p&gt;
&lt;p&gt;Manifestos and cyber-utopianism aside, digital sovereignty’s concept creep has today gone beyond the simple idea of state control over digital infrastructure or artefacts, and beyond older debates around data localisation.[^4] On the one hand, it has expanded to notions of state-led self-sufficiency: via the national capitalist champions of the digital realm, of course. And on the other, it has broadened to include notions of collectives and individuals having a certain degree of control over the digital — generally in order to protect some legal or notional right, from some form of digital encroachment on the part of (foreign) private actors.[^5] This concept creep is a theoretical dilution, given that any ‘sovereignty’ that an individual exercises is through the coercive power of the state, and given that, conversely, state sovereignty built on the basis of an open alliance with national capital is contradictory to the ability of a state to regulate as a representative of the individual.&lt;/p&gt;
&lt;p&gt;Especially in practice, this expanded notion of sovereignty is a bag of contradictions. A person exercising individual control over their patch of the digital — motivated by their individual security, privacy, or economic interests — will have goals that are diametrically opposed to those of a state exercising “sovereignty” over some subset of the digital for developmentalist interests. Digital sovereignty in the former case can be framed in terms of liberal individualism or rights, and in the latter case, in terms of nationalism or self-sufficiency. This problem of sovereignty is further muddied when the notion of digital community rights is considered, given that a large part of digital infrastructure is made and held as a commons.[^6] Indeed, there are sharp contradictions between the interests of communities engaged in the production and reproduction of digital artefacts (such as data) and those of states and capital. Digital institutions that are sufficiently independent of states yet powerful enough to represent the interests of such communities necessarily complicate any notion of digital sovereignty.&lt;/p&gt;
&lt;p&gt;None of this is unique to the digital realm. The idea of control that is not by states; of control that is not necessarily over territory; or of control that is not absolute, but limited, negotiated, and constrained in practice — all of this has been referred to in terms of sovereignty, for a variety of reasons. Take, for instance, the vague categories of “seed sovereignty” and “body sovereignty” — categories that have existed and have been used as political ammunition in the three-sided war between individual interests, the pressures of capital, and the interests of the state.[^7] Linguistically speaking, the word sovereignty lends itself to a specific weaponisation, where it is used to legitimise some future exercise of power, often by weaker actors asserting control against some sort of hegemonic power.[^8]&lt;/p&gt;
&lt;p&gt;National sovereignty of the old-fashioned kind, too, has, in reality, always been fairly constrained. Capital does not respect national policies beyond a limit, and nation-states exercising self-determination over such policies have historically accommodated capital while policing labour.[^9] In short, sovereignty under capitalism has never been an analytically clean category.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Digital sovereignty as a concept has seen a sudden surge in usage recently due to the spread of a certain novel digital technology, referred to in public discourse under the market term &lt;em&gt;artificial intelligence&lt;/em&gt;, and the consequent production of the new analytical category of &lt;em&gt;AI sovereignty&lt;/em&gt;. Like digital sovereignty, the concept is vaguely and contentiously defined; yet at its core, it is used to refer to a state’s capacity to regulate (or even just understand) AI systems deployed within its territory.[^10] This discourse is important, as leading firms in the AI industry have disproportionate influence — having trained their models on snapshots of the internet, they are capable of concentrating immense wealth and power, and of exploiting data and information which they technically do not own.[^11] This enclosure of the digital commons to feed the data requirements of continuously evolving models is an ongoing process; a sort of primitive accumulation whose power is as material as it is discursive. And even beyond the usual collaboration between states and global capital, states have to go out of their way to accommodate AI firms, while seemingly getting not much in return, at least in the immediate future. Frontier AI systems are massively expensive and often wasteful efforts, with no straightforward path to profit. This compels states to use coercive power to enclose vast amounts of data and to invest in acquiring or building computing infrastructure, often without guarantees on the benefits of scale. In addition, they are compelled to enact policies that generate and discipline the sophisticated labour force that this industry needs to exploit.&lt;/p&gt;
&lt;p&gt;Why do states go along with this? First, it is important to highlight that the attraction to AI is not &lt;em&gt;solely&lt;/em&gt; due to states being the dupes and catspaws of digital capital. Digital capital has, after all, proved to be immensely useful for securitisation, and states have used the obfuscatory power of AI-based tools to police welfare provisions, enhance tax collections, and engage in mass surveillance — and to normalise all of the above by enhancing their own discursive ability using the same tools. This normalisation is evident in how judicial and police authorities all over the world have enthusiastically begun to adopt AI policies that convert legal rights into services. AI firms are happy to sell their services for border enforcement or for military use. The data obtained from people across the globe, ostensibly for innocent uses like multilingual research, feeds into AI models, which, in turn, feeds into the violence of capital.[^12] Digital sovereignty here becomes a synonym for state capacity and a justification for the rapid ingress of these technologies into civic life, ignoring the structures of ownership and control of the technology, all the while keeping the base of these technologies firmly within global digital capitalism. The cost in resources and in control over everyday life is borne by the working class.&lt;/p&gt;
&lt;p&gt;Beyond securitisation, a popular idea pushed by AI firms has been that being “left behind” in AI research is a strategic blunder, both economic and geopolitical. AI firms have also gone out of their way to push a narrative that states &lt;em&gt;need&lt;/em&gt; to feed into frontier AI systems, in order to convince state actors that their future power (or even existence) depends upon getting on the AI bus. And thus does “digital sovereignty” once again modify its meaning, this time referring to national self-sufficiency in the realm of AI. And since AI is a vague market term, the way this plays out is as a race after models, data centres, and imported compute. This is even visible at a subnational level, where promises of future prosperity can induce local governments to sink their revenues, and to tie the economic interests of their residents to the expansion of resource-intensive data centres.[^13]&lt;/p&gt;
&lt;p&gt;This admirable spirit of sacrifice is not just reserved for nation-states, compelled to sacrifice their national interests upon the altar of AI. In pushing this narrative, these tech firms themselves end up becoming the biggest champions of digital sovereignty, apparently entirely opposed to their own profit interest. Take, for instance, NVIDIA CEO Jensen Huang’s enthusiasm about the idea of sovereign AI, or OpenAI’s Stargate program in the UAE.[^14]&lt;/p&gt;
&lt;p&gt;At a surface level, this curious phenomenon of AI firms championing nationalism can be explained as an attempt by technology firms to take over the discursive terrain of digital sovereignty. Tech giants like Alphabet, Amazon, and Microsoft initiate schemes offering states sovereign control over their digital infrastructure and programmes, and undertake self-regulatory actions like &lt;em&gt;Digital Sovereignty Pledges&lt;/em&gt; as a selling point for using their services — essentially creating a new category of sovereignty-as-a-service.[^15] In doing so, they also reframe the meaning of sovereignty once again. Sovereignty now becomes a product that a state can buy, as a family of technologies that these firms can vet and sell. Not only is sovereignty in this setting ideologically designated as &lt;em&gt;private&lt;/em&gt; property — it is private property that you can rent only from a small and closed ecosystem of technology giants.&lt;/p&gt;
&lt;p&gt;Yet at a deeper level, this phenomenon is not just discursive but material. It is the newest and sharpest iteration of a well-recognised dynamic in which nationalism and the “national interest” ultimately serve global capitalism. When Nvidia promises to build chips or train developers for a nation-state, for instance, they also expect a commitment to buying their AI models and cloud services.[^16] These models benefit from the data harvested in the very country to which this sovereign AI is offered, but in a broader sense, they also promote vendor lock-in into systems that can intensify the exploitation of workers in these territories.&lt;/p&gt;
&lt;p&gt;This dynamic has two immediate causes. First, AI systems are rather technically complex, and competition across the entire AI stack is effectively impossible. This means that when nation-states push for AI sovereignty, they immediately require more software, technical know-how, and chips, and often rely upon imports to acquire them. Downstream, this leads to the prioritisation of policies that subordinate workers&apos; interests to the development of an AI ecosystem. This, in turn, benefits global digital capital.&lt;/p&gt;
&lt;p&gt;Second, the actual, specific knowledge of workers who now have to join this AI ecosystem in pursuit of sovereignty ends up being used to build and refine models. And this, in turn, reduces workers’ collective bargaining power, as they slowly march towards their own obsolescence.[^17]&lt;/p&gt;
&lt;p&gt;Finally, once normalised, these tendencies intensify into an “AI race” between “AI superpowers”, where collaborations between AI firms and nation-states allow states immense latitude to spend resources, and to marshal power over local digital infrastructure, all in the name of nationalism and future greatness. The interests of workers and the wage relation — together with other structural questions like the nature of the economy, the environment, and so on — all turn into secondary political concerns. And all the while, the inconvenient fact remains: digital capital is happy to work across borders, using the idea of national self-sufficiency to make itself larger, sell its own inevitability, and sell products and services that help embed it more firmly, while providing returns that are questionable at best.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;It is important to acknowledge two realities. The first is that the production, ownership, and control of digital systems — both the physical technology and the capital that finances it — are transnational. Any attempt to separate “real” technology from “fake” finance to produce the former locally is social-democratic magical thinking. This does not mean that policy is impossible: just that there are hard constraints on the production of digital systems, and that pushing for regulations and contextual bans is no more unrealistic than blindly “getting on the bus”. The second reality is that — as any honest evaluation of the past decade, and of industry realities since the advent of generative AI, makes clear — the idea of tech nationalism, just like economic nationalism in general, fails to pose an effective challenge to the reality of capital, which will happily work with nationalist projects, and even benefit from national rivalries.[^18] Yes, there may once have been an honest use of digital sovereignty as an emancipatory framework (although, as we have seen, always amongst many contending uses). The logics of localisation and tech nationalism may once have been used as weapons against the dominant vision of free data being a resource for digital capital to plunder without pause. Today, however, the few instances in which so-called sovereignty has proved useful in the digital realm have been those where the state is elided as the primary subject of sovereignty, shifting the subject instead to individual persons or to communities, who are in their turn defended from above by activists or lawyers, or from below by radical, organised workers.&lt;/p&gt;
&lt;p&gt;Ultimately, these battles are better framed in the language of workers’ rights, democracy, socialism, and universal standards. Digital capital is international, and any response to it which is not international will be a vicious red herring. Digital sovereignty, as a category, mystifies more than it explains — and what it mystifies is politically the opposite of the outcome that it purports to champion.&lt;/p&gt;
&lt;p&gt;[^1]:  John Perry Barlow, “A Declaration of the Independence of Cyberspace”, &lt;em&gt;Electronic Frontier Foundation&lt;/em&gt;, 8 February 1996.&lt;/p&gt;
&lt;p&gt;[^2]:  Dakota Cary, “Community watch: China’s vision for the future of the internet”, &lt;em&gt;Atlantic Council&lt;/em&gt;, 4 December 2023.&lt;/p&gt;
&lt;p&gt;[^3]:  Dana Polatin-Reuben and Joss Wright, “An Internet with BRICS Characteristics: Data Sovereignty and the Balkanisation of the Internet”, &lt;em&gt;4th USENIX Workshop on Free and Open Communications on the Internet (FOCI 14)&lt;/em&gt;, 2014.&lt;/p&gt;
&lt;p&gt;[^4]:  Chiara Del Giovane, Janos Ferencz, and Javier López González, “The nature, evolution and potential implications of data localisation measures”, &lt;em&gt;OECD Trade Policy Papers&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^5]:  Stephane Couture and Sophie Toupin, “What does the notion of ‘sovereignty’ mean when referring to the digital?”, &lt;em&gt;New Media &amp;amp; Society&lt;/em&gt;, 2019.&lt;/p&gt;
&lt;p&gt;[^6]:  Michael Max Bühler et al., “Unlocking the power of digital commons: Data cooperatives as a pathway for data sovereign, innovative and equitable digital communities”, &lt;em&gt;Digital&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^7]:  Jack Kloppenburg, “Re-purposing the master’s tools: the open source seed initiative and the struggle for seed sovereignty”. In Marc Edelman, ed. &lt;em&gt;Critical Perspectives on Food Sovereignty&lt;/em&gt;, 2017; Michelle Murphy, &lt;em&gt;Seizing the means of reproduction: Entanglements of feminism, health, and technoscience&lt;/em&gt;, Duke University Press, 2020.&lt;/p&gt;
&lt;p&gt;[^8]:  Wouter G. Werner and Jaap H. De Wilde, “The endurance of sovereignty”, &lt;em&gt;European Journal of International Relations&lt;/em&gt;, 2001.&lt;/p&gt;
&lt;p&gt;[^9]:  William I. Robinson and Xuan Santos, “Global capitalism, immigrant labor, and the struggle for justice”, &lt;em&gt;Class, Race and Corporate Power&lt;/em&gt;, 2014.&lt;/p&gt;
&lt;p&gt;[^10]:  Luca Belli, “Exploring the key AI sovereignty enablers (KASE) of Brazil, towards an AI sovereignty stack”, &lt;em&gt;Annual Conference of the Global Internet Governance Academic Network&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^11]:  Jeffrey Cheng et al., “Dated data: Tracing knowledge cutoffs in large language models”, &lt;em&gt;arXiv&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^12]:  Charles Rollet, “Cohere is quietly working with Palantir to deploy its AI models”, &lt;em&gt;TechCrunch&lt;/em&gt;, 16 December 2024.&lt;/p&gt;
&lt;p&gt;[^13]:  Ellen Thomas, “Meta&apos;s data center could be &apos;transformative&apos; for Louisiana, utility says — as long as customers pay the $5 billion power bill”, &lt;em&gt;Business Insider&lt;/em&gt;, 25 April 2025.&lt;/p&gt;
&lt;p&gt;[^14]:  Brian Caulfield, “NVIDIA CEO: Every Country Needs Sovereign AI”, &lt;em&gt;NVIDIA Blog,&lt;/em&gt; 12 February 2024; Stephen Nellis, “&apos;Stargate UAE&apos; AI datacenter to begin operation in 2026”, &lt;em&gt;Reuters&lt;/em&gt;, 22 May 2025.&lt;/p&gt;
&lt;p&gt;[^15]:  Rafael Grohmann and Alexandre Costa Barbosa, “Sovereignty-as-a-service: How big tech companies co-opt and redefine digital sovereignty”, &lt;em&gt;Media, Culture &amp;amp; Society&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^16]:  Dean Takahashi, “Nvidia CEO touts India&apos;s progress with sovereign AI and over 100K AI developers trained”, &lt;em&gt;VentureBeat&lt;/em&gt;, 24 October 2024.&lt;/p&gt;
&lt;p&gt;[^17]:  Kaya Genç, “Desperate for work, translators train the AI that’s putting them out of work”, &lt;em&gt;Rest of World&lt;/em&gt;, 20 February 2025.&lt;/p&gt;
&lt;p&gt;[^18]:  Jamie Merchant, “Fantasies of Secession: A Critique of Left Economic Nationalism”, &lt;em&gt;The Brooklyn Rail&lt;/em&gt;, February 2018.&lt;/p&gt;
</content:encoded></item><item><title>Beyond Autonomy</title><link>https://disjunctionsmag.com/articles/beyond-autonomy</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/beyond-autonomy</guid><description>Personalised wages as alienation</description><pubDate>Mon, 23 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Advocates, academics, and policymakers alike have increasingly raised digital manipulation — the attempt to influence digital users’ behaviours and decision-making — as a cause for concern. This problem is primarily discussed in the context of consumers, such as with the Cambridge Analytica scandal, or with personalised advertising to teens on Instagram.[^1] But this manipulation does not stop at the consumer: it also affects &lt;em&gt;workers&lt;/em&gt;, whose employers increasingly subject them to an array of digital management techniques at work. A particularly salient example of this is Uber, who have — perhaps more than any other company — leveraged such digital manipulation as a labour management technique. Because Uber insists that its workers are not employees but independent contractors (a view reflected in labour regulation throughout the United States), it cannot directly control drivers’ schedules or routes. Instead, the company has devised a range of techniques, borrowing from behavioural sciences, to covertly manipulate drivers into working at certain times and in certain areas.[^2] These include tactics such as sending drivers carefully crafted texts and pop-ups to keep them on the road; automatically queuing rides; or sending drivers push notifications that attempt to convince them to keep working whenever they try to log off.&lt;/p&gt;
&lt;p&gt;Critiques of Uber’s labour practices are widespread. Often, scholars and advocates have grounded their arguments against these practices in appeals to drivers’ autonomy.[^3] While self-determination is always constrained under waged labour, app-based employment holds the promise of expanded freedom and choice, and many choose to drive for Uber because it lets them decide when and where they work. But Uber’s use of digital manipulation, or so the argument goes, diminishes individuals’ ability to freely make these decisions. However, framing Uber’s digital manipulation in terms of autonomy limits how such techniques can be understood — and challenged. Instead, Uber’s practices are better described through the theory of alienation.&lt;/p&gt;
&lt;p&gt;In a recent, widely-cited legal paper, Daniel Susser, Beate Roessler, and Helen Nissenbaum argue that Uber’s digital management techniques are harmful because they degrade workers’ autonomy. The authors define autonomy as “the capacity to make one’s own choices, with respect to both existential and everyday decisions.”[^4] They posit that individuals can, for the most part, rationally deliberate and act according to the reasons they think are best, and that digital manipulation subverts this individual decision-making power, thus undermining autonomy. This degradation of autonomy is of grave concern to the authors. To them — due to the relationship between independent decision-making and democratic institutions — autonomy lies at the very normative core of liberal democracy. In their understanding, then, autonomy takes the role of a &lt;em&gt;necessary&lt;/em&gt; background condition to well-functioning liberal society.&lt;/p&gt;
&lt;p&gt;To understand this conceptualisation of autonomy and its relationship to digital manipulation, we must examine the intellectual tradition from which they draw. Indeed, the principle of autonomy is a cornerstone of traditional liberal thought. John Stuart Mill, one of the most influential liberal philosophers, describes autonomy as essential to human flourishing, a key element of personal well-being. In his so-called &lt;em&gt;liberty principle&lt;/em&gt;, he argues that an individual’s “self-regarding thoughts and actions” ought to be protected from interference. Mill’s critique was aimed at state interference rather than at the private sector; but his liberty principle is certainly echoed in autonomy-based critiques of Uber’s digital manipulation, which argue that such practices interfere with individuals’ decision-making and therefore undermine self-determination and well-being.&lt;/p&gt;
&lt;p&gt;The autonomy argument is convincing — in large part because it reflects the documented experiences of Uber drivers. Indeed, in legal scholar Veena Dubal’s ethnographic work with Uber and Lyft drivers, she finds that drivers frequently report a lack of control and autonomy.[^5] As one driver explains: “It really feels like you are being manipulated [by Uber]... it literally feels like you’re being punished by some unknown spiteful god.” This experience of manipulation is at odds with the promises of app-based work. The classification of drivers as independent contractors means, according to the U.S. Internal Revenue Service, that they “have the right to control or direct the result of their work”.[^6] In practice, however, they report a very different experience. As another explained to Dubal, “It’s like being gaslit every day, being told you are independent and being manipulated in all these different ways. Every single day, they are figuring out how to exploit you in different ways.”&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Despite capturing some aspects of drivers’ affective experiences, the autonomy-based critique has severe limitations that become clear when Uber’s business model is examined more closely.&lt;/p&gt;
&lt;p&gt;By identifying unfreedom as the primary harm caused by digital manipulation, Susser et al. presuppose that Uber drivers would enjoy freedom if they were not subject to these practices. Without Uber’s use of digital manipulation, it is implied, drivers could act as entrepreneurs with the capacity to make their own choices. This assumption is, of course, starkly at odds with the experience of waged labour under capitalism, in which a worker — having nothing to sell but their labour-power — is entirely dependent upon doing so to survive. The focus on unfreedom is also entirely at odds with how Uber’s business functions. Digital manipulation is not simply a technique that Uber occasionally uses to adjust drivers’ behaviour; it is, rather, central to how the company secures profits. Uber’s very profit-model, that is to say, relies upon the ability to render drivers &lt;em&gt;de facto&lt;/em&gt; employees, by exerting control over where, when, and for how much they work — all the while evading the financial and legal responsibilities of direct employment.&lt;/p&gt;
&lt;p&gt;In the U.S., rideshare companies have worked tirelessly to ensure that their drivers are classified either as independent contractors or as “third-category” workers, an employment status that falls between employee and independent contractor. This allows Uber and other companies to exert power similar to that of an employer (such as by slowing down rides offered or locking drivers out of the app to effectively control their work time), while shifting employment costs and responsibilities (like minimum wages, paid leave, and benefits) onto workers.&lt;/p&gt;
&lt;p&gt;U.S. employers have long used race to justify differential worker rights. For example, the Fair Labor Standards Act initially excluded domestic and farm workers — professions dominated by Black and immigrant workers — from minimum wage protections. This carve-out, which lasted for decades, essentially legalised lower pay for racialised sectors of the economy. It was only following persistent pressure from social and labour movements that Congress amended the FLSA, in the late 1960s. Uber continues this legacy of treating racialised workforces as second-class employees, who are controlled like actual employees but receive none of the protections.[^7] Dubal notes that the rideshare workforce is made up primarily of immigrants and people of colour. “But rather than addressing racial inequalities by improving the precarious working conditions of their primarily people-of-colour workforce”, she argues, “the rideshare companies Uber and Lyft have used the existence of these inequalities as a resource to justify and legalise their business model” — via independent worker classification, that is to say. As was the case with agricultural and domestic workers, the racial makeup of the rideshare workforce has everything to do with their (mis)classification as independent contractors — a classification that constitutes yet another chapter in U.S. labour law’s history of racial exclusion.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Rather than autonomy, alienation offers a more accurate and politically potent framework for understanding the digital manipulation of labour, helping to reveal how this practice threatens not just drivers’ autonomy, but also their economic security, and their capacity to see themselves as part of a collective. In Marxist theory, alienation describes a structural, objective condition under a particular political-economic system — capitalism, that is — rather than simply being a subjective experience. In the &lt;em&gt;Grundrisse&lt;/em&gt;, Marx describes how, under capitalist production, working people are separated from both the process and product of their labour. This separation has profound consequences. As workers, we must sell our labour-power to access the material basis necessary for survival. We are “doubly free”: free to sell our labour-power, and free to otherwise starve. Those who own the means of production dispossess us of the products of our own labour, transforming such products into commodities, the sale of which yields more capital. Our labour, therefore, results in an ever-increasing productive power for capitalists. Meanwhile, our daily life as workers holds no relation to our desires, no relation to our self-expression, and no relation to who we are or might try to become — it is alien work. And with such work as the central organising principle of capitalist society, we become alienated from ourselves and from others.&lt;/p&gt;
&lt;p&gt;Unlike autonomy, alienation foregrounds the material relations between the worker and the owner of the means of production. Whereas autonomy focuses solely on an individual worker’s affective experience, alienation links it to workers’ &lt;em&gt;collective&lt;/em&gt; exploitation, and to &lt;em&gt;capitalists’&lt;/em&gt; accumulation of wealth. Consider, for instance, Uber’s use of personalised wages — one of its most powerful digital manipulation techniques. Over the past five years, Uber and other rideshare companies have begun to use driver, consumer, and other contextual data to generate targeted payment offers calculated by means of a highly opaque algorithm. An experiment by media outlet &lt;em&gt;More Perfect Union&lt;/em&gt; showed how Uber offered drivers in the exact same location different rates for the same rides — proving that rates are, at least in part, calculated using individual drivers’ behavioural data.[^8] This practice is enabled by the surveillance and legal infrastructure that surrounds gig work. Since rideshare drivers are not considered employees, Uber does not need to comply with minimum wage laws. Further, both drivers and passengers are subject to the tracking of their location, their transactions, and their behavioural patterns. This information allows Uber to calculate a personalised wage, which is essentially the lowest possible payment that they can get a particular driver at a particular moment in time to accept. As Dubal explains: “individual workers are paid different hourly wages — calculated with ever-changing formulas using granular data […] — for broadly similar work.”[^9]&lt;/p&gt;
&lt;p&gt;Like Uber’s other digital manipulation techniques, personalised wages are hidden from drivers, an opacity that is arguably intentional. Black-box pay algorithms make it more difficult for drivers and regulators to hold companies accountable to fair and transparent pay standards, while also enabling Uber to adaptively manipulate drivers’ behaviour, by identifying the lowest rate at which a driver will still accept a ride.[^10] While the precise formula remains hidden, the purpose is clear: to exploit a driver’s vulnerabilities and incentivise behaviour that benefits the company.&lt;/p&gt;
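&lt;p&gt;To make the mechanism concrete, the sketch below offers a deliberately simplified, hypothetical illustration of how an offer might be priced just above a driver’s estimated reservation wage using behavioural data. The features, weights, and thresholds here are assumptions made purely for the sake of exposition; Uber’s actual formulas remain proprietary and opaque.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical sketch: estimate the lowest offer a driver is likely to accept,
# then price the ride just above that estimate. Purely illustrative; the real
# formulas are proprietary and far more complex than this.

def estimated_reservation_wage(past_acceptance_rate, hours_online_today):
    # Guess the minimum per-ride pay this driver will accept, from behavioural data.
    base = 12.0  # assumed baseline offer, in dollars
    # Drivers who historically accept almost everything can be offered less.
    acceptance_discount = 4.0 * past_acceptance_rate
    # Drivers who have been online for a long stretch are likelier to take low offers.
    fatigue_discount = 0.5 * min(hours_online_today, 6)
    return max(base - acceptance_discount - fatigue_discount, 5.0)

def personalised_offer(past_acceptance_rate, hours_online_today, margin=0.25):
    # Offer marginally above the estimated reservation wage.
    wage = estimated_reservation_wage(past_acceptance_rate, hours_online_today)
    return round(wage + margin, 2)

# Two drivers requesting the same ride from the same place receive different offers.
print(personalised_offer(0.95, 8))  # highly compliant, fatigued driver: lower offer
print(personalised_offer(0.40, 1))  # more selective, fresher driver: higher offer
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Even in this toy form, the asymmetry is visible: the more compliant and fatigued the driver, the lower the offer they receive for identical work.&lt;/p&gt;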
&lt;p&gt;Seen through the lens of autonomy, Uber’s use of personalised wages to influence driver behaviour is problematic primarily because it threatens a driver’s decision-making power. The political economy of this practice, however, falls outside the scope of such a critique. But clearly, personalised wages are first and foremost a &lt;em&gt;material&lt;/em&gt; practice that results in lower wages for the collective body of drivers, and higher profits for Uber. As scholar Zephyr Teachout has argued, these personalised wages function as a tool for wealth transfer.[^11] The resulting wages for drivers are abysmal. One study found that after expenses, drivers take home an average of $6.20 per hour.[^12] Uber’s CEO Dara Khosrowshahi, meanwhile, earns nearly $40 million every year.[^13]&lt;/p&gt;
&lt;p&gt;A framework of analysis based on alienation helps to explain how personalised wages continuously reinforce Uber’s ability to extract wealth from drivers. The separation of workers from the product and process of their labour means that the more they work, the more surplus-value they generate for capitalists. The more a driver works, then, accepting personalised payment rates for rides, the more data Uber can collect, and the more it can fine-tune its wage-targeting systems. This data includes not just ride transactions and ratings, but also everything from how quickly a driver brakes to how frequently they stop, and for what and where. This information populates driver profiles, which the company can use to target wages to match perceived driver incentives.[^14] Thus, the harder drivers work, the more personalised their wages become; and the more personalised their wages become, the harder they must work. As Dubal found in her ethnographic work, the longer drivers worked, the lower their hourly wages would fall. Personalised wages, Uber’s profit, and the impoverishment of the workers are thus recursively linked.&lt;/p&gt;
&lt;p&gt;Within a framework that only recognises the individual’s loss of power, we are also unable to see how digital manipulation functions as a tool of de-collectivisation — constraining the ability of workers to see themselves as part of something larger, and to construct a collective form of autonomy. Dubal describes how Uber drivers often notice that they earn different amounts than their peers, even when they drive roughly the same routes and hours. These differences in pay generate feelings of individual failure and shame. But they also work to atomise workers, corroding the social ties on which collectivisation and organised resistance depend. As one driver told Dubal: “Any time there’s some big shot getting high payouts, they always shame everyone else and say you don’t know how to use the app.”&lt;/p&gt;
&lt;p&gt;Personalised pay pits driver against driver — &lt;em&gt;the war of all against all&lt;/em&gt;. It thwarts efforts to build solidarity and organise. When workers are separated from the process of labour and thrust into algorithmic silos, they can only relate to other workers as adversaries, or as tools in their pursuit of wages. Their capacity to relate to one another as potential collaborators is inhibited, and their efforts to organise are thus stymied. This group-level effect is not simply a composite of individual harms, but a result of something inherently relational.&lt;/p&gt;
&lt;p&gt;Ultimately, drivers continue to agitate and organise — as have the generations of workers that have come before them, confronted with the technological impositions of their own era.[^15] As Sergio Bologna once stated: “Every attempt to dissolve class identity through fragmentation, individualisation, and dispersion ends by producing new subterranean forms of collective behavior, invisible until they suddenly erupt.” What form this resistance will take might be difficult to anticipate in our present moment — but if the history of class struggle tells us anything, it is that it is inevitable.&lt;/p&gt;
&lt;p&gt;[^1]:  Nicholas Confessore, “Cambridge Analytica and Facebook: The Scandal and the Fallout So Far”, &lt;em&gt;The New York Times&lt;/em&gt;, 4 April 2018.&lt;/p&gt;
&lt;p&gt;[^2]:  Noam Scheiber, “How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons”, &lt;em&gt;The New York Times&lt;/em&gt;, 2 April 2017.&lt;/p&gt;
&lt;p&gt;[^3]:  Isaac Chotiner, “When Your Boss Is an Algorithm”, &lt;em&gt;Slate&lt;/em&gt;, 26 October 2018; Sarah Kessler, “How Uber Manages Drivers Without Technically Managing Drivers”, &lt;em&gt;Fast Company&lt;/em&gt;, 9 August 2016.&lt;/p&gt;
&lt;p&gt;[^4]:  Daniel Susser, Beate Roessler and Helen Nissenbaum, “Online Manipulation: Hidden Influences in a Digital World”, &lt;em&gt;Georgetown Law Technology Review&lt;/em&gt;, 2019.&lt;/p&gt;
&lt;p&gt;[^5]:  Veena Dubal, “On Algorithmic Wage Discrimination”, &lt;em&gt;Columbia Law Review&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^6]:  See: &lt;a href=&quot;https://www.irs.gov/businesses/small-businesses-self-employed/independent-contractor-defined&quot;&gt;https://www.irs.gov/businesses/small-businesses-self-employed/independent-contractor-defined&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;[^7]:  Veena Dubal, “The New Racial Wage Code”, &lt;em&gt;Harvard Law &amp;amp; Policy Review&lt;/em&gt;, 2022.&lt;/p&gt;
&lt;p&gt;[^8]:  Eric Gardner, “Here’s What Happened When We Put 7 Uber Drivers In the Same Room”, &lt;em&gt;More Perfect Union&lt;/em&gt;, 3 April 2024.&lt;/p&gt;
&lt;p&gt;[^9]:  Dubal, “On Algorithmic Wage Discrimination”.&lt;/p&gt;
&lt;p&gt;[^10]:  Dara Kerr, “Secretive Algorithm Will Now Determine Uber Driver Pay in Many Cities”, &lt;em&gt;The Markup&lt;/em&gt;, 1 March 2022.&lt;/p&gt;
&lt;p&gt;[^11]:  Zephyr Teachout, “Algorithmic Personalized Wages”, &lt;em&gt;Politics &amp;amp; Society&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^12]:  Eliza McCullough, “Prop 22 Provides Drivers with Inferior Benefits to Those Guaranteed to Employees”, 2022.&lt;/p&gt;
&lt;p&gt;[^13]:  See: &lt;a href=&quot;https://aflcio.org/paywatch/UBER&quot;&gt;https://aflcio.org/paywatch/UBER&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;[^14]:  Teachout, “Algorithmic Personalized Wages”.&lt;/p&gt;
&lt;p&gt;[^15]:  For example, see this recent strike led by app-based drivers in the UK: Schannell Kanyora, “Inside private hire drivers&apos; strike: 18 hour shifts, passenger violence and unfair pay”, &lt;em&gt;The Mirror&lt;/em&gt;, 14 Feb 2026.&lt;/p&gt;
</content:encoded></item><item><title>Speculating Our Way Through Crisis</title><link>https://disjunctionsmag.com/articles/speculating-crisis</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/speculating-crisis</guid><description>Overaccumulation, hegemony and AI hype</description><pubDate>Sun, 08 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It is no secret that investments in generative AI are through the roof. The scale is staggering: in 2025 alone, the so-called &lt;em&gt;Magnificent 7&lt;/em&gt; poured over $400 billion into AI infrastructure; Morgan Stanley estimate that global investment in data centres and hardware will approach $3 trillion by 2029, while OpenAI alone have secured contracts totalling more than $1 trillion.[^1] This is only the tip of the iceberg: today’s financial flows in the AI industry increasingly resemble the railroad expansions of the 19th century, well eclipsing the dot-com boom of the late 1990s. There is little doubt that we are witnessing one of the most significant reallocations of capital in modern economic history.&lt;/p&gt;
&lt;p&gt;There are, however, good reasons to believe that this enthusiasm rests on a fragile foundation. First, high revenue growth notwithstanding, these firms’ earnings remain but a fraction of cumulative investments.[^2] To date, there is no clear path to profitability for these vast, capital-intensive infrastructures, and little to suggest that such a path could even exist. Second, even if we put problems with short-term profitability aside, Silicon Valley appear to be caught in a circular investment loop.[^3] To wit, a triangular relationship has emerged between chip manufacturers, cloud providers, and AI firms, wherein actors like Nvidia invest heavily in AI firms and these firms, in turn, commit to purchasing vast amounts of hardware and renting cloud services from the same investors. This circularity artificially inflates revenues, creating the appearance of market depth that is underpinned by capital round-tripping, rather than by meaningful economic fundamentals.&lt;/p&gt;
&lt;p&gt;Productivity gains from generative AI, too, have remained modest.[^4] Improvements in core services remain elusive, as the brute-force scaling paradigm reaches a point of diminishing returns. This is driven by both the exhaustion of high-quality human data, and by the intrinsic architectural constraints of probabilistic modelling[^5] — constraints that will inhibit models from ever transcending the &lt;em&gt;stochastic parrot&lt;/em&gt; stage to become true, systematic reasoning engines.[^6]&lt;/p&gt;
&lt;p&gt;Thus far, AI’s economic contribution appears to have been largely confined to accelerating existing routine activities rather than opening up new sources of value creation. Building on earlier waves of mechanisation that transformed factory work, generative AI imposes industrial rhythms on knowledge work, forcing the logic of automation upon labour processes that are ostensibly cognitive or creative.[^7] It is on this basis that claims about AI-enabled productivity gains are typically justified. However, historically, sustained economic surges have required fundamental upheavals in production, and since generative AI is based on statistical repetition of past patterns, it remains a tool of imitation. While this recombination offers value to capital by streamlining output and degrading labour, it lacks the ability to effect the transformative discoveries and the opening of new spheres of value creation that would be necessary to justify current investment levels.[^8] The idea that AI can replace the human creativity and innovation on which such discoveries depend remains something of a fantasy.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Given the widening gap between investments and profits, an important question arises: why does capital continue to pour into AI? The answer lies less in some sort of collective misjudgement on the part of investors than in the laws of motion of capital itself. As Marx observed, the accumulation of capital periodically reaches a threshold where existing capital can no longer be reinvested without further depressing the rate of profit.[^9] And in recent years, we have indeed witnessed a steadily increasing concentration of liquidity in the hands of a few tech monopolies, who are now faced with limited productive investment opportunities. The AI bubble should thus be read as a symptom rather than a cause of this crisis of overaccumulation. Immense volumes of surplus capital, unable to find sufficiently profitable outlets after decades of globally stagnating profit rates, flood into these highly speculative projects, in the hopes that they might one day turn a profit.[^10] And since these investments represent claims on future profits that have yet to be (and, indeed, may never be) realised, they transform into fictitious capital, exacerbating the decoupling of assets on corporate balance sheets and in stock markets from real production. The current hype bubble serves only to accelerate the expansion of these fictitious claims, concealing the fundamentally crisis-prone nature of the current accumulation regime.&lt;/p&gt;
&lt;p&gt;The extent to which the U.S. economy has become tethered to this speculative cycle is striking. Some 40% of real GDP growth in the preceding year was driven by the capital expenditure offensives undertaken by major technology firms, directed almost exclusively toward the AI sector.[^11] Without this massive spending, the U.S. economy would likely have entered a period of stagnation, or even formal recession.[^12] These investments are largely backed by the American state, which is securing the conditions for accumulation in the AI economy via subsidies, strategic industrial policy, and technocratic legislation. A crash in this bubble — which appears to be imminent — would reverberate through the global financial system, through pension funds and international investment chains, calling into question the very economic foundations of U.S. hegemony. And since leading tech firms are now considered too big to fail, it is highly likely that the state would aim to stabilise them, to cushion the blow with massive liquidity support and interest rate cuts.&lt;/p&gt;
&lt;p&gt;Yet, importantly, this bubble’s collapse would not mean the end of generative AI or the downfall of Big Tech. Lest we forget — monopolistic tech giants like Google and Amazon were themselves forged in the flames of the dot-com crash. Crisis, in these contexts, primarily serves a market-clearing function; it wipes out significant portions of (inflated) market value, and concentrates power and capital with the few monopolies that survive. As such, crisis does not lead to a return to some much-vaunted capitalist competitive equilibrium, but rather helps entrench the technological and economic dominance of a few capitalists.&lt;/p&gt;
&lt;p&gt;At the end of the day, it is the working classes that will bear the brunt of capital’s excesses, in rising living costs and the loss of jobs and savings — as we witnessed after the subprime mortgage crisis. The risks intrinsic to capitalism itself will once again be socialised.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;It is difficult, however, to characterise the current AI moment as a purely economic phenomenon. Indeed, the development and deployment of AI today are clearly driven by geopolitical forces, chief among which is the systemic rivalry between the United States and China. The American ruling classes believe in the strategic necessity of this technology for maintaining their hold on power. In accordance with this, what we are witnessing is a convergence between the profit motive and the American state’s project of global hegemony.[^13] This new geopolitical order is being shaped by struggles over code, data flows, semiconductor production, data centres, and related infrastructures.&lt;/p&gt;
&lt;p&gt;As trade barriers intensify and rival blocs coalesce, the AI race is emerging as a central instrument of modern imperialism, with technological supremacy determining future geopolitical and economic capacity.[^14] The United States is pursuing an offensive strategy, aiming to export its entire technology stack as a global standard through the so-called &lt;em&gt;AI Action Plan&lt;/em&gt;. Corporations like OpenAI, Oracle, and Microsoft have rapidly oriented themselves in line with Trump’s declaration that the US would “do whatever it takes” to lead the world in AI.[^15] And while the United States indulges in massive infrastructural investments to entrench its digital hegemony, the costs of these expansions are rerouted to the Global South. The ruthless extraction of lithium and cobalt systematically destroys local livelihoods and ecosystems, long before a single server is powered on. Furthermore, the “intelligence” of these models is built upon the invisible labour of a vast workforce in the semi-periphery, where workers are paid subsistence wages to label data and moderate content. This creates a deepening technological dependency for semi-peripheral nations, relegating their role in global capitalism to that of raw material suppliers or captive markets for Western cloud monopolies, as they are then forced to lease back the very technologies that were built upon their own raw materials and labour.[^16] By controlling data centres, cloud infrastructure, and undersea cabling, tech corporations are appropriating the digital economy’s means of production and circulation, while simultaneously serving as the “eyes and ears” of the security apparatus — both overseas and at home.[^17] Infrastructure becomes the material foundation of geopolitical control, as the entire world — outside of China — is pushed further and further into dependency.&lt;/p&gt;
&lt;p&gt;The U.S. state’s alignment with Big Tech extends into the battlefield.[^18] The integration of artificial intelligence marks a turning point in U.S. defence strategy. While software has long automated military logistics, the current revolution lies in algorithmic autonomy: the ability for cheap, disposable machines to operate in swarms without direct human piloting. Through initiatives such as &lt;em&gt;Replicator&lt;/em&gt;, the Pentagon is increasingly relying on mass-produced, autonomous drone swarms. Other initiatives include &lt;em&gt;Project Maven&lt;/em&gt; for automated target recognition and &lt;em&gt;Joint All-Domain Command and Control&lt;/em&gt; for networked operations management.&lt;/p&gt;
&lt;p&gt;The global periphery has already become a testing ground for these lethal innovations. In Ukraine, Palantir has deployed AI‑enabled software that integrates satellite imagery, drone footage, and battlefield reporting to support targeting decisions and present military options in near-real time, compressing what used to take hours into minutes and thereby shortening the “kill chain” on the battlefield.[^19] Meanwhile, in Gaza, Israel’s &lt;em&gt;Lavender&lt;/em&gt; database uses behavioural and social metadata for large-scale targeting and execution. This combination of low-cost autonomous drones and pervasive data extraction is redefining modern warfare.[^20]&lt;/p&gt;
&lt;p&gt;Ultimately, this shift reveals the destructive maturity of the current global order. Technology is being instrumentalised, not to liberate humanity, but to refine the mechanisms of exploitation and warfare. Under the imperatives of capital, artificial intelligence is a tool of digital barbarism, cementing global dependencies and preemptively stifling any form of resistance. Yet it need not remain this way.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;The current mania surrounding artificial intelligence does not resolve any of the fundamental contradictions of the capitalist mode of production: the growing concentration of wealth; the decline in the rate of profit; the overaccumulation of unproductive capital; and the intensified rivalry between imperialist powers that accompanies all of the above. Yet, tech elites have successfully imposed their vision of society, the future, and progress, as if no alternative could exist — or even be desirable. It is essential to shatter this narrative while avoiding a retreat into technophobia. Our task, rather, should be to free technology from the constraints imposed by capital.&lt;/p&gt;
&lt;p&gt;What, then, is at stake? As argued above, Big Tech’s infrastructure now functions as a core instrument of geopolitical power, and its ownership determines both who captures value and who shapes the future direction of technological development. In addition, digital technologies have not eliminated labour as much as concealed it. The tech industry rests on the collective work of millions — from programmers in imperial metropolises to precarious workers in the global South mining rare earth minerals for starvation wages — making questions of ownership and control of technology inseparable from the terrain of contemporary class struggle.&lt;/p&gt;
&lt;p&gt;Ultimately, reformist demands to increase taxation or break up monopolies are mere palliatives for a deeply broken system, since they leave the core problem of private investment and planning untouched, merely seeking to mitigate its worst excesses. Fundamental change requires the expropriation of Big Tech: not as an end in itself, but as a lever for a broader socialist transition that radically reorients how technology and its infrastructures are created and used, liberating them from their subordination to capital and bringing them under the democratic control and management of those who develop, use, and are affected by them — workers and society as a whole.&lt;/p&gt;
&lt;p&gt;The ground is shifting. As imperialist rivalries intensify and the global struggle for technological dominance sharpens, the AI bubble stands on the verge of a collapse that will profoundly undermine public faith in the current system. However, we cannot sit idle while Big Tech uses the chaos to consolidate its power. It is essential that we lay the organisational and political foundations today. We need bold visions that challenge the logics of private profit and extraction, replacing them with a framework of collective utility — through democratising economic planning and ensuring, ultimately, that technology functions as a public resource dedicated to meeting human needs and fostering global solidarity.[^21]&lt;/p&gt;
&lt;p&gt;By organising around transitional demands, we can connect immediate struggles — job losses, price hikes, and anti-militarisation efforts — to the fundamental contradictions of capitalism, exposing the deep conflict between collective labour and private ownership. The growing scepticism, fear, and resistance surrounding AI and the tech behemoths should also be seen as a political opportunity to dismantle the myth of capitalist inevitability. It is only by grounding our movement in demands that transcend the limits of reformism that we can lay the foundation for a future where technology serves the needs of society as a whole, rather than the imperatives of capital.&lt;/p&gt;
&lt;p&gt;We demand nothing less than sovereignty over the tools that shape our future.&lt;/p&gt;
&lt;p&gt;[^1]:  Rolfe Winkler, Nate Rattner, and Sebastian Herrera, “Big Tech’s $400 Billion AI Spending Spree Just Got Wall Street’s Blessing”, &lt;em&gt;The Wall Street Journal&lt;/em&gt;, 31 July 2025; Andrew Sheets, “Who Will Fund AI’s $3 Trillion Ask?”, Morgan Stanley; Tabby Kinder and George Hammond, “OpenAI’s computing deals top $1tn”, &lt;em&gt;Financial Times&lt;/em&gt;, 7 October 2025.&lt;/p&gt;
&lt;p&gt;[^2]:  OpenAI, for instance, are not expected to turn a profit until at least 2029. See: Bailey Lipschultz and Shirin Ghaffary, “OpenAI Expects Revenue Will Triple to $12.7 Billion This Year”, &lt;em&gt;Bloomberg&lt;/em&gt;, 26 March 2025.&lt;/p&gt;
&lt;p&gt;[^3]:  Emily Forgash and Agnee Ghosh, “OpenAI, Nvidia Fuel $1 Trillion AI Market With Web of Circular Deals”, &lt;em&gt;Bloomberg&lt;/em&gt;, 7 October 2025.&lt;/p&gt;
&lt;p&gt;[^4]:  A recent MIT study of 300 publicly announced AI initiatives found that 95% failed to increase profitability. See: “The GenAI Divide: State of AI in Business 2025”, &lt;em&gt;MIT NANDA&lt;/em&gt;. McKinsey have reported similar results: in a survey of companies deploying generative AI, over 80% saw no measurable impact on earnings. See: “Seizing the agentic AI advantage”, &lt;em&gt;McKinsey &amp;amp; Company.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;[^5]:  High-quality training data is projected to plateau between 2026 and 2032, creating a potential bottleneck for scaling language models. See: Pablo Villalobos &lt;em&gt;et al.&lt;/em&gt;, “Position: will we run out of data? limits of LLM scaling based on human-generated data”, &lt;em&gt;Proceedings of the 41st International Conference on Machine Learning&lt;/em&gt;, 2024. Using synthetic data to compensate is also no solution, since models trained on their own generations progressively lose information and experience functional decay. See: Ilia Shumailov &lt;em&gt;et al.&lt;/em&gt;, “AI models collapse when trained on recursively generated data”, &lt;em&gt;Nature&lt;/em&gt;, 2024. For an analysis of the constraints of the transformer architecture, see:  Dieuwertje Luitse and Wiebke Denkena, “The Great Transformer: Examining the Role of Large Language Models in the Political Economy of AI”, &lt;em&gt;Big Data &amp;amp; Society&lt;/em&gt;, 2021.&lt;/p&gt;
&lt;p&gt;[^6]:  Emily M. Bender, Timnit Gebru, Angelina McMillan‑Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, &lt;em&gt;Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency&lt;/em&gt;, 2021. For a discussion on the scaling-dependent “emergent” abilities of language models, see: Jason Wei et al., “Emergent Abilities of Large Language Models”, &lt;em&gt;arXiv&lt;/em&gt;, 2022; Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo, “Are Emergent Abilities of Large Language Models a Mirage?”, &lt;em&gt;Advances in Neural Information Processing Systems&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^7]:  Vinit Ravishankar and Mostafa Abdou, “The Rise and Fall of the Knowledge Worker”, &lt;em&gt;Jacobin&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^8]:  This position is not unpopular with market liberals: Nobel laureate Daron Acemoglu acknowledges, for instance, that forecasts for economic growth from AI are likely to be far smaller than projections imply. See: Daron Acemoglu, “The Simple Macroeconomics of AI”, 2024.&lt;/p&gt;
&lt;p&gt;[^9]:  Karl Marx, &lt;em&gt;Capital, Vol. III&lt;/em&gt;, ch. 25, 1894.&lt;/p&gt;
&lt;p&gt;[^10]:  Aaron Benanav, &lt;em&gt;Automation and the Future of Work&lt;/em&gt;, 2020; Guglielmo Carchedi and Michael Roberts, &lt;em&gt;Capitalism in the 21st Century: Through the Prism of Value&lt;/em&gt;, 2022.&lt;/p&gt;
&lt;p&gt;[^11]:  Ruchir Sharma, “America is now one big bet on AI”, &lt;em&gt;Financial Times&lt;/em&gt;, 6 October 2025.&lt;/p&gt;
&lt;p&gt;[^12]:  Deutsche Bank Research, “The world economy is in a few people&apos;s hands”, 24 September 2025.&lt;/p&gt;
&lt;p&gt;[^13]:  Nick Dyer-Witheford and Alessandra Mularoni, “Cybernetic Circulation Complex. Big Tech and Planetary Crisis”, 2025.&lt;/p&gt;
&lt;p&gt;[^14]:  Nick Srnicek, “Silicon Empires: The Fight for the Future of AI”, 2025.&lt;/p&gt;
&lt;p&gt;[^15]:  Volker Briegleb, “Trump: ‘America will win the AI race’”, &lt;em&gt;heise online&lt;/em&gt;, 24 July 2025.&lt;/p&gt;
&lt;p&gt;[^16]:  Michael Kwet, &lt;em&gt;Digital Degrowth: Technology in the Age of Survival&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^17]:  Andrea Coveri, Claudio Cozza, and Dario Guarascio, “Blurring Boundaries: An Analysis of the Digital Platforms-Military Nexus”, &lt;em&gt;Review of Political Economy&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^18]:  Dario Guarascio, Andrea Coveri and Claudio Cozza, “Big Tech and the US Digital-Military-Industrial Complex”, &lt;em&gt;Intereconomics&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^19]:  Vera Bergengruen, “How Tech Giants Turned Ukraine Into an AI War Lab”, &lt;em&gt;TIME&lt;/em&gt;, 8 February 2024.&lt;/p&gt;
&lt;p&gt;[^20]:  Yuval Abraham, “‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza”, &lt;em&gt;+972 Magazine&lt;/em&gt;, 3 April 2024.&lt;/p&gt;
&lt;p&gt;[^21]:  Martín Schapiro and Gerónimo Pelli, “Algorithmen für Alle: Künstliche Intelligenz im Sozialismus”, &lt;em&gt;Klasse Gegen Klasse&lt;/em&gt;, 2025.&lt;/p&gt;
</content:encoded></item><item><title>A Global Labour Regime for Data Work?</title><link>https://disjunctionsmag.com/articles/global-labour-regime</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/global-labour-regime</guid><description>Exploitation, control, and their convergences across geographies</description><pubDate>Wed, 04 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The vast networks of global build-outs necessary for the production of digital technologies have prompted extensive discussions on the infrastructural power of digital capital. These debates — often characterising these infrastructures as “megamachines”, or as a “planetary stacking order” — aim to clarify how digital capitalists have gained greater control over society through an increasing concentration of power and capital.[^1]&lt;/p&gt;
&lt;p&gt;The scale of these build-outs has led scholars to argue that the “platform capitalism” of the 2010s is now giving way to a new “era of AI”.[^2] While the age of platform capitalism was characterised by social media companies generating profits through network effects, cross-subsidisation, and micro-targeted advertising, the AI era can be seen as increasingly dominated by “Big AI” companies. These companies — OpenAI, Anthropic, Cohere, Nvidia, and the like — benefit from infrastructural power and control over the architecture necessary to produce AI systems.&lt;/p&gt;
&lt;p&gt;This apparent concentration of power has also sparked a parallel debate over whether or not we are witnessing a transition to a new mode of production: a form of &lt;em&gt;techno-feudalism&lt;/em&gt;, where tech-conglomerates increasingly rely upon speculative financial valuation and rent-extraction in order to maximise profits.[^3] While speculation and rent extraction can indeed serve as sources of profit for tech companies, what is often overlooked is the fact that the geographically dispersed infrastructure that undergirds machine learning also relies upon the exploitation of diverse categories of workers, within the international division of digital labour.[^4] Indeed, the global supply chains of machine learning systems themselves include the work of software engineers and data scientists; the preparation and verification of datasets by human annotators; the free labour of digital media users; the manual assembly of hardware; and the mining and refining of so-called critical rare-earth minerals.[^5]&lt;/p&gt;
&lt;p&gt;Particularly with respect to the data required to train machine learning models, these global networks of production involve vast amounts of labour, paid or otherwise.[^6] In this light, as Antonio Casilli argues in his recent book &lt;em&gt;Waiting for Robots&lt;/em&gt;, data work can be viewed as the “basic and constant form” of platform work writ large.[^7] “Data work” — an umbrella term for data annotation, verification, content moderation, and so on — tends to occur under the auspices of business process outsourcing companies (BPOs), where workers perform their tasks in physical office settings; or through digital labour platforms (DLPs), where work can be performed from anywhere with a laptop and internet connection.[^8] And as the hype bubble around AI nears its peak, the need for human data preparation and verification has only grown, prompting large technology companies to outsource increasing amounts of data work through these subcontractors.[^9]&lt;/p&gt;
&lt;p&gt;Proclamations of a novel era of AI development in global capitalism may thus be overstated. The strategies for capital accumulation and managerial control that AI firms and their outsourcing partners deploy draw upon platform capitalism’s organisational and management models.[^10] And these models, too, did not emerge out of nowhere: rather, they followed from the specific lineages of the logistics revolution, and digital service work in call centres.[^11]&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;The global population of data workers has rightly been theorised as a reserve army of labour, since they constitute an un- and under-employed workforce, intermittently drawn into the digital economy’s circuits of value accumulation.[^12] In the context of digital labour platforms, the exploitation of this reserve army occurs through a digitally mediated “planetary labour market”.[^13] The word &lt;em&gt;planetary&lt;/em&gt; here does not imply that geographical differences become irrelevant — rather, it signifies how these differences are strategically leveraged by tech companies, through labour arbitrage and cross-border competition. Even in a highly digitised, global labour market, exploitation is always asymmetrically embedded in specific geographical locations, and conditioned by political-economic, environmental, and cultural factors.[^14]&lt;/p&gt;
&lt;p&gt;Analyses of digital labour platforms through the lens of labour process theory have tended to elide precisely these socio-political contexts.[^15] &lt;em&gt;Labour regime analysis&lt;/em&gt; provides a useful alternative theoretical lens here, linking micro-level antagonisms in the production process to macro-level dynamics in the global economy.[^16] It extends labour process theory’s narrow focus on the immediate process of production to encompass surrounding institutional arrangements, geographically anchoring the specificity of determinate labour processes within their local contexts.&lt;/p&gt;
&lt;p&gt;Through the lens of labour regime analysis, then, we can begin to describe labour regimes as “invisible infrastructures” that mobilise workers for production, simultaneously extending and intensifying work within the labour process itself.[^17] Such infrastructures are made up of competitive pressures in the global economy; international regulations such as trade agreements; national organisations and institutions such as trade union federations and social protection systems; and modes of social reproduction, at the household level. Together, these components of labour regimes compel workers to exert greater labour power in the production process, all the while accepting wages and working conditions that are amenable to capital&apos;s dictates. Labour regimes also express specific underlying logics, or operational rationalities, that can adapt to different social, political, and legal contexts.[^18] These underlying logics manifest themselves on online platforms in the form of algorithmic work allocation; digital tracking and monitoring; rating systems; independent contractor status; and legal and regulatory arbitrage — together forming a burgeoning “platform management model”.[^19]&lt;/p&gt;
&lt;p&gt;These manifestations of platform capitalism are closely tied to historical shifts in the global economy: from de-industrialisation and rising unemployment, to the proliferation of precarious and flexible forms of work.[^20] The advent of the gig economy in the global North epitomises these shifts, reflecting a restructuring of work in line with the long-standing quotidian reality of informal workers in the global South.[^21] We can, thus, view this platform management model as an instance of neoliberal globalisation, in which capital’s expansionary logic has made the relations of production more precarious for workers in both the global South and the North.[^22]&lt;/p&gt;
&lt;p&gt;It is important here, however, to guard against accounts of capitalist expansion in the global South as unilinear or homogenising, and against treating their contexts as underspecified “elsewheres”.[^23] Theorising a single global labour regime risks overlooking the geographically anchored institutional arrangements that regulate local labour markets, framing workers in the global South as passive victims who remain entirely under the control of capital and the state.[^24] Yet — geographic divergences notwithstanding — we can still identify shared features and tendencies, and conceive of a global labour control regime as a constellation of dynamics and mechanisms that intensify the exploitation of data workers globally. Indeed, in contrast with celebratory but superficial accounts of worker agency, understanding these common tendencies can provide concrete avenues for eliminating constraints and obstacles to the realistic exercise of this agency.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;How can we characterise the commonalities and differences between the positions and experiences of data workers across the world today? DLPs and BPOs — where most annotation labour takes place — have traditionally been analysed as separate types of businesses. Whereas the former have been conceived as a “planetary labour market” — since anyone with an internet connection can log onto them and begin performing tasks — the latter are seen more as traditional companies.[^25] Yet, there is abundant evidence that workers are exposed to similar forms of exploitation and managerial control across both types of enterprises, an expression of the convergent tendencies emerging in the global market for outsourced data annotation services.&lt;/p&gt;
&lt;p&gt;For instance, both BPOs and DLPs employ a management model in which workers are integrated into teams composed of data workers (euphemistically referred to as “associates” or “agents”), quality analysts, team leads, and project managers.[^26] In this setup, data workers send their completed tasks to the QAs, who review the annotations and send their reviews to the team leaders and project managers, after which monthly or weekly spreadsheets of performance ratings are produced.[^27] In both BPOs and DLPs, this review process is supported by algorithmic management systems that monitor workers’ performance metrics: accuracy, efficiency, productivity, occupancy, and so on.[^28] And inspired by DLPs, some BPOs also now employ reputational systems, in which workers&apos; ratings are visible, creating a highly competitive “gamified” environment. All of this is reinforced by the general oversupply of data workers, and by managerial demands for faster and more accurate annotation.[^29]&lt;/p&gt;
&lt;p&gt;The Taylorist logics applied here — through the quantification of the labour process, and the production of conditions in which humans are compelled to behave more as machines — also reflect the instrumental rationality prevalent in digital capitalism in a wider sense.[^30] And in terms of management style, the combination of machinic and human management is deeply reminiscent of the call-centre management model.[^31] Such convergences have led to the emergence of more and more hybrid organisations that present themselves as DLPs while integrating forms of managerial supervision and discipline that we normally find in conventional companies.[^32]&lt;/p&gt;
&lt;p&gt;Of course, there are also obvious differences between working for a BPO and on a DLP, as well as differences between specific platforms or companies.[^33] Standard DLPs tend to offer piece-rate remuneration to workers, classifying them as independent contractors; BPOs, on the other hand, tend to offer full-time (yet temporary or short-term) contracts, with monthly salaries and employer contributions to health insurance schemes.[^34] Differences such as these can lead to considerable fragmentation amongst workers, according to the informality of their working arrangements. In and of itself, this fragmentation serves as a controlling device in the labour process.[^35]&lt;/p&gt;
&lt;p&gt;Yet, ultimately, as digital capitalism’s ambit expands globally, competition drives capitalists to adopt one another’s structures and techniques to control and exploit workers more effectively. It is, paradoxically, precisely this tendency towards convergence that can create opportunities for a collective workers’ consciousness, helping workers find common ground and outrun capitalist control.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;On a broader scale, particularly when comparing data workers across the global South and the North, major differences begin to emerge.[^36] After all, working on online platforms from the slums of Kibera or Mathare in Nairobi presents a rather different reality to doing platform work to supplement your income as a student in Copenhagen. We know, for instance, that a higher proportion of workers in the global South report that online data work is their primary source of income, and that these workers perform twice the amount of unpaid work on platforms compared to those in the global North. And on average, they earn only half the amount that their Northern counterparts do, despite having higher educational levels.[^37] This shows how the production of machine learning systems reinforces the uneven geographies of extraction produced by historical developments in global capitalism — thus making it inseparable from the legacies of colonial domination of human labour in the global South by capitalist enterprises in the global North.[^38]&lt;/p&gt;
&lt;p&gt;Even though the past decades have witnessed a broader global convergence towards platform management, we can still trace a distinct trajectory followed specifically by labour control regimes for data workers in the global South. While the exact form the exploitation of data workers takes might differ across geographies, different fractions of capital do adopt similar methods and tools to perfect this exploitation. In the global South, we observe the most pronounced convergence toward such shared characteristics because the socio-economic and political conditions here are most conducive to the extension and intensification of data worker exploitation.&lt;/p&gt;
&lt;p&gt;All of this makes it profoundly important to take seriously the political-economic and social contexts in which data work is carried out. Broadly similar conditions and dynamics are evident across various geographies in the global South, where high unemployment rates result in a chronic oversupply of labour. This — combined with large informal economies, pervasive poverty, limited access to social protections, and weak or corrupt political institutions — creates the perfect set of incentives for companies to outsource data work.[^39] Within a &lt;em&gt;global data work labour control regime&lt;/em&gt;, then, we can observe the emergence of a common set of features. Workers are subjected to human and algorithmic monitoring, evaluation, and discipline; their pay rates are variable, calibrated to both the volume and quality of work performed; unpaid labour proliferates; information asymmetries are entrenched; and workforces dynamically expand and contract in response to client demand.[^40] These features overlap with more locally determined mechanisms that use the oversupply of labour to extract more labour power — such as regulatory arbitrage across geographical regions according to levels of labour protection, or non-disclosure agreements and anti-mobilisation clauses in contracts.&lt;/p&gt;
&lt;p&gt;Understanding these general tendencies in data work labour regimes can elucidate the dynamics through which the contemporary, digitised capitalist totality reproduces and entrenches its power structures. Strategically speaking, it can help foreground the shared interests of digital workers within the global economy, and identify choke points in the global supply chains of machine learning systems — where strikes or sabotage could disrupt the accumulation of capital. Ultimately, a focus on how labour regimes intensify the exploitation of data workers across contexts enables us to map the obstacles and constraints that limit the exercise of collective agency, and to chart a path towards dismantling the hellscape that is digital capitalism.&lt;/p&gt;
&lt;p&gt;[^1]:  Kate Crawford, &lt;em&gt;Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence&lt;/em&gt;, 2021; Florian A. Schmidt, “The Planetary Stacking Order of Multilayered Crowd-AI Systems”. In Mark Graham and Fabian Ferrari, eds., &lt;em&gt;Digital Work in the Planetary Market&lt;/em&gt;, 2022; Antonio Casilli and Julian Posada, “The Platformization of Labor and Society”. In &lt;em&gt;Society and the Internet: How Networks of Information and Communication Are Changing Our Lives&lt;/em&gt;, 2019.&lt;/p&gt;
&lt;p&gt;[^2]:  James Muldoon, Callum Cant, and Mark Graham, &lt;em&gt;Feeding the Machine: The Hidden Human Labour Powering AI&lt;/em&gt;, 2025; Paul Langley and Andrew Leyshon, “Platform Capitalism: The Intermediation and Capitalisation of Digital Economic Circulation”, &lt;em&gt;Finance and Society&lt;/em&gt;, 2017.&lt;/p&gt;
&lt;p&gt;[^3]:  Cédric Durand, &lt;em&gt;How Silicon Valley Unleashed Techno-Feudalism: The Making of the Digital Economy&lt;/em&gt;, 2024; Evgeny Morozov, “Critique of Techno-Feudal Reason”, &lt;em&gt;New Left Review&lt;/em&gt;, 2022.&lt;/p&gt;
&lt;p&gt;[^4]: Christian Fuchs, &lt;em&gt;Digital Capitalism&lt;/em&gt;, 2022; Greig Charnock and Ramon Ribera-Fumaz, “What’s Talent Got to Do with It? The Collective Labourer and the Rise of Barcelona’s Digital Economy”, &lt;em&gt;Antipode&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^5]:  Kerry Holden and Matthew Harsh, “On Pipelines, Readiness and Annotative Labour: Political Geographies of AI and Data Infrastructures in Africa”, &lt;em&gt;Political Geography&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^6]:  James Muldoon, Callum Cant, Boxi A. Wu, and Mark Graham, “A Typology of Artificial Intelligence Data Work”, &lt;em&gt;Big Data &amp;amp; Society&lt;/em&gt;, 2024; Lorenzo Cini, “How Algorithms Are Reshaping the Exploitation of Labour-Power: Insights into the Process of Labour Invisibilization in the Platform Economy”, &lt;em&gt;Theory and Society&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^7]:  Antonio A. Casilli, &lt;em&gt;Waiting for Robots: The Hired Hands of Automation&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^8]:  Paola Tubaro, Antonio A. Casilli, and Marion Coville, “The Trainer, the Verifier, the Imitator: Three Ways in Which Human Platform Workers Support Artificial Intelligence”, &lt;em&gt;Big Data &amp;amp; Society&lt;/em&gt;, 2020.&lt;/p&gt;
&lt;p&gt;[^9]:  Examples include Alphabet’s &lt;em&gt;Raterhub&lt;/em&gt; and &lt;em&gt;Crowdsource&lt;/em&gt; platforms, or Microsoft’s internal &lt;em&gt;Universal Human Relevance System&lt;/em&gt; (UHRS) platform. OpenAI and Meta have outsourced content moderation and model output verification tasks to low-wage Kenyan data workers through the BPO &lt;em&gt;Samasource&lt;/em&gt;. See: International Labour Organization, &lt;em&gt;Digital Labour Platforms in Kenya: Exploring Women’s Opportunities and Challenges Across Various Sectors&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^10]:  Ursula Huws, “Where Did Online Platforms Come From? The Virtualization of Work Organization and the New Policy Challenges it Raises”. In Pamela Meil, Vassil Kirov, eds., &lt;em&gt;Policy Implications of Virtual Work&lt;/em&gt;, 2017; Jamie Woodcock, “Artificial intelligence at work: The problem of managerial control from call centers to transport platforms”, &lt;em&gt;Frontiers in Artificial Intelligence&lt;/em&gt;, 2022.&lt;/p&gt;
&lt;p&gt;[^11]:  Sandro Mezzadra and Brett Neilson, “Operations of Platforms: A Global Process in a Multipolar World”. In Sandro Mezzadra et al., eds., &lt;em&gt;Capitalism in the Platform Age: Emerging Assemblages of Labour and Welfare in Urban Spaces&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^12]:  Patrizia Zanoni and Frederick Harry Pitts, “Inclusion Through the Platform Economy? The ‘Diverse’ Crowd as Relative Surplus Populations and the Pauperisation of Labour”. In &lt;em&gt;The Routledge Handbook of the Gig Economy&lt;/em&gt;, 2022.&lt;/p&gt;
&lt;p&gt;[^13]:  Mohammad Amir Anwar and Mark Graham, “The Global Gig Economy: Towards a Planetary Labour Market?”, &lt;em&gt;First Monday&lt;/em&gt;, 2019.&lt;/p&gt;
&lt;p&gt;[^14]:  Mohammad Amir Anwar, Susann Schäfer, and Slobodan Golušin, “Work Futures: Globalization, Planetary Markets, and Uneven Developments in the Gig Economy”, &lt;em&gt;Globalizations&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^15]:  Alessandro Gandini, “Labour Process Theory and the Gig Economy”, &lt;em&gt;Human Relations&lt;/em&gt;, 2019; Simon Joyce and Mark Stuart, “Digitalised Management, Control and Resistance in Platform Work: A Labour Process Analysis”. In Julieta Haidar and Maarten Keune, eds., &lt;em&gt;Work and Labour Relations in Global Platform Capitalism&lt;/em&gt;, 2021.&lt;/p&gt;
&lt;p&gt;[^16]:  Jamie Peck, “Modalities of Labour: Restructuring, Regulation, Regime”. In Elena Baglioni et al., eds., &lt;em&gt;Labour Regimes and Global Production&lt;/em&gt;, 2022.&lt;/p&gt;
&lt;p&gt;[^17]:  Elena Baglioni, Liam Campling, Alessandra Mezzadri, Satoshi Miyamura, Jonathan Pattenden, and Benjamin Selwyn, “Exploitation and Labour Regimes: Production, Circulation, Social Reproduction, Ecology”. In Elena Baglioni et al., eds., &lt;em&gt;Labour Regimes and Global Production&lt;/em&gt;, 2022.&lt;/p&gt;
&lt;p&gt;[^18]:  Mezzadra &amp;amp; Neilson, “Operations of Platforms”; Maurilio Pirone, “Out of the Standard: Towards a Global Approach to Platform Labour”. In Sandro Mezzadra et al., eds., &lt;em&gt;Capitalism in the Platform Age: Emerging Assemblages of Labour and Welfare in Urban Spaces&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^19]:  Phoebe V. Moore and Simon Joyce, “Black Box or Hidden Abode? The Expansion and Exposure of Platform Work Managerialism”, &lt;em&gt;Review of International Political Economy&lt;/em&gt;, 2020.&lt;/p&gt;
&lt;p&gt;[^20]:  Jamie Woodcock and Mark Graham, &lt;em&gt;The Gig Economy: A Critical Introduction&lt;/em&gt;, 2020.&lt;/p&gt;
&lt;p&gt;[^21]:  Alessandra Mezzadri, “Social Reproduction, Labour Exploitation and Reproductive Struggles for a Global Political Economy of Work”. In Mauro Atzeni et al., eds., &lt;em&gt;Handbook of Research on the Global Political Economy of Work&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^22]:  Kevan Harris and Phillip A. Hough, “Labour Regimes, Social Reproduction and Boundary‑Drawing Strategies Across the Arc of US World Hegemony”. In Elena Baglioni et al., eds., &lt;em&gt;Labour Regimes and Global Production&lt;/em&gt;, 2022.&lt;/p&gt;
&lt;p&gt;[^23]:  Philip F. Kelly, “The Political Economy of Local Labor Control in the Philippines”, &lt;em&gt;Economic Geography&lt;/em&gt;, 2001.&lt;/p&gt;
&lt;p&gt;[^24]:  Neethi P., “Globalization Lived Locally: Investigating Kerala’s Local Labour Control Regimes”, &lt;em&gt;Development and Change&lt;/em&gt;, 2012.&lt;/p&gt;
&lt;p&gt;[^25]:  Mark Graham and Mohammad Amir Anwar, “The Global Gig Economy: Towards a Planetary Labour Market?”, &lt;em&gt;First Monday&lt;/em&gt;, 2019.&lt;/p&gt;
&lt;p&gt;[^26]:  Milagros Miceli, Martin Schuessler, and Tianling Yang, “Between Subjectivity and Imposition: Power Dynamics in Data Annotation for Computer Vision”, &lt;em&gt;Proceedings of the ACM on Human-Computer Interaction&lt;/em&gt;, 2020; Milagros Miceli, Julian Posada, and Tianling Yang, “Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power?”, &lt;em&gt;Proceedings of the ACM on Human-Computer Interaction&lt;/em&gt;, 2022.&lt;/p&gt;
&lt;p&gt;[^27]:  Srravya Chandhiramowuli and Bidisha Chaudhuri, “Match Made by Humans: A Critical Enquiry into Human‑Machine Configurations in Data Labelling”, &lt;em&gt;Proceedings of the 56th Hawaii International Conference on System Sciences&lt;/em&gt;, 2023; Bidisha Chaudhuri and Srravya Chandhiramowuli, “Tracing the Displacement of Data Work in AI: A Political Economy of ‘Human‑in‑the‑Loop’”, &lt;em&gt;Engaging Science, Technology, and Society&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^28]:  James Muldoon, Callum Cant, Mark Graham, and Funda Ustek‑Spilda, “The Poverty of Ethical AI: Impact Sourcing and AI Supply Chains”, &lt;em&gt;AI and Society&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^29]:  Alex J. Wood and Vili Lehdonvirta, “Platforms Disrupting Reputation: Precarity and Recognition Struggles in the Remote Gig Economy”, &lt;em&gt;Sociology&lt;/em&gt;, 2023; Agnieszka Piasna, “Algorithms of Time: How Algorithmic Management Changes the Temporalities of Work and Prospects for Working Time Reduction”, &lt;em&gt;Cambridge Journal of Economics&lt;/em&gt;, 2024; Srravya Chandhiramowuli, Alex S. Taylor, Sara Heitlinger, and Ding Wang, “Making Data Work Count”, &lt;em&gt;Proceedings of the ACM on Human‑Computer Interaction&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^30]:  Jernej Amon Prodnik, “Algorithmic Logic in Digital Capitalism”. In Pieter Verdegem, ed., &lt;em&gt;AI for Everyone? Critical Perspectives&lt;/em&gt;, 2021; Moritz Altenried, “The Platform as Factory: Crowdwork and the Hidden Labour Behind Artificial Intelligence”, &lt;em&gt;Capital and Class&lt;/em&gt;, 2020.&lt;/p&gt;
&lt;p&gt;[^31]:  In Kenya, companies such as CloudFactory operate their own online platforms, where taking screenshots and surveilling remote data workers via their laptop webcams is not uncommon. Other platforms, such as the Computer Vision Annotation Tool (CVAT), provide companies with digital means to outsource data annotation, allowing them to integrate workers into their own teams of QAs and team leads, with communication occurring through Telegram, Signal, or Slack channels. Client companies can thus introduce more direct human management and supervision through digital means, circumventing issues related to data quality and security that are associated with outsourcing to an anonymous, global crowd of data workers. See: Muldoon et al., “A Typology of Artificial Intelligence Data Work”.&lt;/p&gt;
&lt;p&gt;[^32]:  Clément Le Ludec, Maxime Cornet, and Antonio A. Casilli, “The Problem with Annotation: Human Labour and Outsourcing Between France and Madagascar”, &lt;em&gt;Big Data &amp;amp; Society&lt;/em&gt;, 2023. An example is the DLP Remotasks, which had several physical offices in Nairobi, Nakuru, and Thika — combining features of the BPO and platform management model.&lt;/p&gt;
&lt;p&gt;[^33]:  In the Kenyan context, there are also more standard, “pure” online platforms, such as Hive AI, as well as BPOs that do not rely on remote work via online platforms.&lt;/p&gt;
&lt;p&gt;[^34]:  Some BPOs also pay workers in cash to avoid paying taxes and social protection.&lt;/p&gt;
&lt;p&gt;[^35]:  Nikolaus Hammer and Lone Riisgaard, “Labour and Segmentation in Value Chains”. In Kirsty Newsome et al., eds., &lt;em&gt;Putting Labour in its Place: Labour Process Analysis and Global Value Chains&lt;/em&gt;, 2015.&lt;/p&gt;
&lt;p&gt;[^36]:  Mark Graham, Isis Hjorth, and Vili Lehdonvirta, “Digital Labour and Development: Impacts of Global Digital Labour Platforms and the Gig Economy on Worker Livelihoods”, &lt;em&gt;Transfer: European Review of Labour and Research&lt;/em&gt;, 2017.&lt;/p&gt;
&lt;p&gt;[^37]:  Uma Rani and Marianne Furrer, “Digital Labour Platforms and New Forms of Flexible Work in Developing Countries: Algorithmic Management of Work and Workers”, &lt;em&gt;Competition and Change&lt;/em&gt;, 2021; International Labour Organization, &lt;em&gt;World Employment and Social Outlook 2021: The Role of Digital Labour Platforms in Transforming the World of Work&lt;/em&gt;, 2021.&lt;/p&gt;
&lt;p&gt;[^38]:  Kelle Howson, Alessio Bertolini, Srujana Katta, Funda Ustek‑Spilda, and Mark Graham, “The Emerging Geographies of Platform Labour: Intensifying Trends in Global Capitalism”. In Valerio De Stefano et al., eds., &lt;em&gt;A Research Agenda for the Gig Economy and Society&lt;/em&gt;, 2022; James Muldoon and Boxi A. Wu, “Artificial Intelligence in the Colonial Matrix of Power”, &lt;em&gt;Philosophy &amp;amp; Technology&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^39]:  Mohammad Amir Anwar and Mark Graham, &lt;em&gt;The Digital Continent: Placing Africa in Planetary Networks of Work&lt;/em&gt;, 2022; Kelle Howson, Hannah Johnston, Matthew Cole, Fabian Ferrari, Funda Ustek‑Spilda, and Mark Graham, “Unpaid Labour and Territorial Extraction in Digital Value Networks”, &lt;em&gt;Global Networks&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^40]:  Valerio De Stefano, “The Rise of the ‘Just‑in‑Time Workforce’: On‑Demand Work, Crowd Work and Labour Protection in the ‘Gig‑Economy’”, &lt;em&gt;SSRN Electronic Journal&lt;/em&gt;, 2015.&lt;/p&gt;
</content:encoded></item><item><title>At Google</title><link>https://disjunctionsmag.com/articles/at-google</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/at-google</guid><description>Organising at the digital arms dealer</description><pubDate>Sun, 25 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;Over the past few years, Google have aggressively suppressed concerns and discussions amongst its employees who opposed the company’s rapid transformation into a digital arms dealer. And as things stand, one of their first major military contracts has made them complicit in genocide. This is a description of worker organising at Google, and of the repression that ensued in response.&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;In April 2021, a self-organised group of workers at Google and Amazon came together to form &lt;em&gt;No Tech for Apartheid&lt;/em&gt; (NoTA). These workers were organising in response to Project Nimbus, a $1.2 billion cloud services contract between their employers and the Israeli government and military.[^1] For this, they faced massive repression, and the more outspoken organisers were forced to resign.[^2] Yet after the events of 7 October 2023, and the commencement of Israel’s brutal assault on the Gaza Strip, it became clear to many of us that Google Cloud and Google’s AI platforms were very likely being used in carrying out large-scale human rights violations.[^3] Consequently, we renewed our efforts to organise with NoTA.&lt;/p&gt;
&lt;p&gt;The first months of 2024 were decisive. In January, the International Court of Justice had already issued an order on provisional measures, finding it plausible that Israel was committing genocide in Gaza, and invoking states’ obligations to prevent genocide. Not two months later, Google fired another one of our organisers.[^4] In early April, the world learned about &lt;em&gt;Project Lavender&lt;/em&gt; and &lt;em&gt;Where’s Daddy&lt;/em&gt;, the artificial intelligence tools used by the IDF to target Palestinians in Gaza.[^5] And ten days later, we found out that Google had agreed to provide the Israeli Ministry of Defence with access to Google Cloud’s Big Data and AI services.[^6] Google, which continued to insist that Project Nimbus was “not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services”, even gave the Ministry a 15% discount on consultations.&lt;/p&gt;
&lt;p&gt;It was clear to us at this point that a genocide was unfolding, and that Google were materially involved. Workers at NoTA therefore decided to organise a livestreamed sit-in to force Thomas Kurian — the CEO of Google Cloud — to meet with us and discuss three demands: drop Project Nimbus; bring an end to the discrimination and harassment of our Palestinian and Muslim colleagues; and address the doxxing and retaliation against workers who spoke out.[^7]&lt;/p&gt;
&lt;p&gt;The demonstrations were met with further repression. Google tried to get workers at the sit-in arrested, likely at Kurian’s behest.[^8] By the end of April, they had fired a total of 50 employees, including workers who were just distributing flyers or were only marginally associated with the protests.[^9] They justified the firings by claiming that protestors had defaced property and physically impeded the work of other employees. This was a complete fabrication. Shortly after the firings, employees also received a threatening email from Chris Rackow — Google’s head of security, with former ties to the Navy SEALs and the FBI.[^10] They were informed in no uncertain terms that if they thought Google would overlook such conduct, they had better “think again”. In a concurrent email sent by Sundar Pichai, employees were informed that Google was a business, not a place to debate politics. Borrowing from the tactics used &lt;em&gt;ad nauseam&lt;/em&gt; by universities across the U.S. and Europe, Pichai maligned the protests as making other workers feel unsafe.[^11]&lt;/p&gt;
&lt;p&gt;The message was clear — employees were to keep their heads down and stay silent, even when their work was being used to enable genocide.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;This was not, of course, the first time Google have seen internal protests against military contracts. In 2018, workers protested against &lt;em&gt;Project Maven&lt;/em&gt;, a machine learning contract between Google and the Department of Defence.[^12] As with Nimbus, Google had initially lied about the scope of the project; it took internal leaks to make it clear that the project entailed training AI systems that would use drone imagery to make targeting decisions.[^13] These revelations sparked heated internal discussions, resulting in petitions that were signed by thousands of workers.[^14]&lt;/p&gt;
&lt;p&gt;The more permissive political climate at the time meant that workers were less afraid to speak out, and felt that they could discuss their concerns with their managers, and that they were being listened to. The protests ended up being broadly successful. Google announced that Project Maven would end in 2019, when the original contract expired, and a set of nebulous “AI principles” was introduced to placate worker discontent — a core promise being that Google would not develop or deploy artificial intelligence for weaponry and surveillance, or for projects that would violate human rights.[^15] This victory was, however, marred by a subsequent statement issued by Kent Walker — Google’s President of Global Affairs, better known today as their public face amidst the antitrust lawsuits against them. Walker was quick to reassure the Pentagon that Google were “eager to do more”, and that the cancellation of Project Maven would not hinder their other work with governments and defence departments around the world.[^16]&lt;/p&gt;
&lt;p&gt;Indeed, shortly thereafter, company policies and internal structures were altered to avoid a repeat of the Project Maven embarrassment. “Communicate With Care” — Walker’s brainchild — was an elaborate cover-up scheme that involved automatically deleting internal chat messages, and labelling routine emails as being under attorney-client privilege.[^17] And after Project Maven’s cancellation, Walker went on to institute Google’s internal need-to-know policy, in order to control what information could and could not be shared amongst teams.[^18] The policy was also accompanied by new community guidelines that discouraged discussing politics at work. All in all, this was an assault on Google’s hitherto relatively open culture of internal information sharing; an attempt by management to preempt any further leaks on ethically questionable projects.&lt;/p&gt;
&lt;p&gt;The next change that Google’s executives enforced was to consolidate power at the very top of the reporting chain. Part of the reason the protests against Maven were successful was that many senior directors and vice-presidents had been receptive to employee concerns. By the time Nimbus came around, the power to address these concerns — or even to discuss them — had long been stripped from the same managerial class, whose job roles had been reduced to the handling of routine administration. Google also ramped up its policing of internal mailing lists and meme-sharing websites, which had served as portals for critiquing company policies.[^19] Any mention of the word “genocide” would get emails and memes removed by moderators; and at one point, even the word “killed” — especially in the context of Gaza — could get a post removed. Once again, the justification was that these posts might distress employees. And by the end of 2024, Google’s clampdowns had reached the point where all offices now featured a “no unauthorised posters” policy. Repeated offences could lead to managerial involvement or to a meeting with HR. It had effectively become close to impossible to find discussion of Project Nimbus on Google’s internal networks — particularly ironic for a company that publicly claims to be organising the world’s information and making it accessible. Nevertheless, Google workers continued to try to share information by flyering, postering, or even writing URLs to discussions on office whiteboards — all approaches that were now prohibited under the new policy.&lt;/p&gt;
&lt;p&gt;Ultimately, the tech industry’s broader job malaise also played a significant role in silencing many employees. In 2022, Google joined the rest of the tech world in announcing mass layoffs; a year later, they laid off another 12,000 workers.[^20] This malaise provided executives with extra leverage over employees — after all, who would want to voice their opinion regarding their employer’s complicity in genocide, with the sword of unemployment always hanging over their heads?&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;All these challenges notwithstanding, NoTA have continued to work against Project Nimbus. Our members have used communication and organisational tools outside the corporate network to organise effectively and safely. We have found increasingly creative ways to reach our colleagues and to share a steady stream of reports on Google’s complicity in the genocide. We have worked with Francesca Albanese — the UN Special Rapporteur on the Occupied Palestinian Territories — to document this complicity.[^21] And our information campaigns have helped spur unionisation efforts among workers at Google DeepMind — Google’s cutting-edge AI research department — efforts aimed at preventing the use of DeepMind’s work by the IDF and by militaries in general.[^22]&lt;/p&gt;
&lt;p&gt;Yet the central obstacle facing tech workers is not the difficulty of consolidating or sharing information, however deliberately companies may try to obstruct this. Our biggest challenge is that the tech industry has little history of unionisation. Project Nimbus represents the first experience of organised labour for many tech workers. With support from organiser networks and established unions, we have focused on providing organiser training to those getting involved. This work is essential because, if we are ever to succeed, meaningful relationships must be built among workers who have been systematically isolated and fragmented into silos. Workers must also come to recognise the collective power they hold, even in the face of explicit threats to their livelihoods.&lt;/p&gt;
&lt;p&gt;This is particularly true given the specificities of the cloud services provided by Google, Amazon, and Microsoft. First, it is precisely these services that are presently implicated in war crimes. Google own everything in their cloud platform “stack” — they rent out platform access to their customers, amongst whom we find the Israeli military. Google executives have routinely resorted to platitudes when asked for accountability: customers must follow Google’s Acceptable Use Policy; the use of Cloud services to harm people is explicitly prohibited; and so on. However, as Microsoft’s recent admissions reveal, the ability to actually verify that these policies are being respected is directly at odds with the data privacy policies that these cloud platforms offer, making it impossible for companies to even know how their platforms are being used.[^23]&lt;/p&gt;
&lt;p&gt;Second, working on these services actually affords workers significant leverage. Cloud platforms differ from other dual-use technologies — such as communication hardware, computer chips, or conventional software — in that they are not simply designed, shipped, and deployed. They are live infrastructures that require continuous human labour to stay functional. Software engineers, site reliability engineers responsible for monitoring uptime, network and hardware engineers who keep data centres operational, are all indispensable to this ongoing maintenance. This serves as a source of worker power: without labour, these systems will come to a grinding halt. Through NoTA’s organiser training efforts, we have been able to bring hundreds of Google workers into the organisation over the past two years, mobilising them to participate in actions and campaigns, and making this power visible through political education and organising.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;NoTA continue to work toward the cancellation of Project Nimbus. The stakes today, however, extend well beyond that one contract. Since Donald Trump’s election, Google have moved rapidly to consolidate their position as the primary provider of the infrastructures of surveillance and oppression. In just the months following the inauguration, they have abandoned their pledge to not use artificial intelligence for surveillance or weaponry; they have begun work with US Customs and Border Protection to augment the southern border’s surveillance infrastructure (provided by Elbit) with AI capabilities; they have entered into an AI Lab partnership with Lockheed Martin, to use AI in targeted weapon systems; they have unveiled a collaboration with Palantir to accelerate the deployment of Google Cloud for sensitive government and military applications; and they have provided ICE with data about Palestine activists in the United States.[^24]&lt;/p&gt;
&lt;p&gt;Google, like most Israeli arms manufacturers, have discovered the utility of the “Palestine laboratory”; of using Palestinians as test subjects for their surveillance and digital arms infrastructures.[^25] They hope, no doubt, that the use of these technologies in Gaza will serve as marketing material when the time comes to sell to the next oppressive government or military. Opposing this project of domination — particularly when it comes to surveillance and weaponry — requires us to harness multiple forms of power and resistance. Thus, while NoTA continue to build power on the inside, we are also looking to build ties with students, activists, academics, AI practitioners, and human rights organisations to pressure Google from the outside. Our collective freedom and our future depend upon it.&lt;/p&gt;
&lt;p&gt;Free Palestine. Free all of us.&lt;/p&gt;
&lt;p&gt;[^1]:  Amitai Ziv, “Israel Picks Google, Amazon for Massive Official Cloud; &apos;Data Will Remain Here&apos;”, &lt;em&gt;Haaretz&lt;/em&gt;, 21 April 2021.&lt;/p&gt;
&lt;p&gt;[^2]:  Nico Grant, “Google Employee Who Played Key Role in Protest of Contract With Israel Quits”, &lt;em&gt;The New York Times&lt;/em&gt;, 30 Aug 2022.&lt;/p&gt;
&lt;p&gt;[^3]:  The IDF later claimed that they would not have been able to continue their operations in Gaza without the help of public cloud platforms, since their internal cloud systems were getting overloaded. See: Yuval Abraham, “‘Order from Amazon’: How tech giants are storing mass data for Israel’s war”, &lt;em&gt;+972 Magazine,&lt;/em&gt; 4 August 2024.&lt;/p&gt;
&lt;p&gt;[^4]:  Billy Perrigo, “Exclusive: Google Workers Revolt Over $1.2 Billion Contract With Israel”, &lt;em&gt;TIME&lt;/em&gt;, 10 April 2024.&lt;/p&gt;
&lt;p&gt;[^5]:  Yuval Abraham, “‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza”, &lt;em&gt;+972 Magazine&lt;/em&gt;, 3 April 2024.&lt;/p&gt;
&lt;p&gt;[^6]:  Billy Perrigo, “Exclusive: Google Contract Shows Deal With Israel Defense Ministry”, &lt;em&gt;TIME&lt;/em&gt;, 12 April 2024.&lt;/p&gt;
&lt;p&gt;[^7]:  Wendy Lee, “Google employees stage sit-ins to protest company’s contract with Israel”, &lt;em&gt;Los Angeles Times&lt;/em&gt;, 16 April 2024. For the livestream, see: https://www.twitch.tv/notech4apartheid/clip/BloodyExquisitePigDogFace-wRpiAtkw4wWAfByl&lt;/p&gt;
&lt;p&gt;[^8]:  Hayden Field, “Google workers arrested after nine-hour protest in cloud chief’s office”, &lt;em&gt;CNBC&lt;/em&gt;, 17 April 2024.&lt;/p&gt;
&lt;p&gt;[^9]:  No Tech For Apartheid Campaign, “STATEMENT from Google workers organizing with the No Tech for Apartheid campaign on Google’s firings of 50 total workers”, &lt;em&gt;Medium&lt;/em&gt;, 23 April 2024. These firings included a Palestinian worker who had briefly stopped by to show his support. See: Chloe Berger, “Ex-Googler and Palestinian-American fired for opposing Project Nimbus speaks out: ‘This was not my idea of what the American workplace should be’”, &lt;em&gt;FORTUNE&lt;/em&gt;, 23 April 2024.&lt;/p&gt;
&lt;p&gt;[^10]:  Alex Heath, “Google fires 28 employees after sit-in protest over Israel cloud contract”, &lt;em&gt;The Verge&lt;/em&gt;, 18 April 2024.&lt;/p&gt;
&lt;p&gt;[^11]:  Robert Hart, “Google Fires More Workers Over Israeli Cloud Contract Protest After CEO Says Leave Politics At Home”, &lt;em&gt;Forbes&lt;/em&gt;, 23 April 2024.&lt;/p&gt;
&lt;p&gt;[^12]:  Azad Essa, “&apos;Google chooses apartheid over justice&apos;: Workers protest against Project Nimbus”, &lt;em&gt;Middle East Eye&lt;/em&gt;, 9 September 2022.&lt;/p&gt;
&lt;p&gt;[^13]:  Lee Fang, “Leaked Emails Show Google Expected Lucrative Military Drone AI Work to Grow Exponentially”, &lt;em&gt;The Intercept&lt;/em&gt;, 31 May 2018.&lt;/p&gt;
&lt;p&gt;[^14]:  Scott Shane, Cade Metz &amp;amp; Daisuke Wakabayashi, “How a Pentagon Contract Became an Identity Crisis for Google”, &lt;em&gt;The New York Times&lt;/em&gt;, 30 May 2018.&lt;/p&gt;
&lt;p&gt;[^15]:   Erin Griffith, “Google Won&apos;t Renew Controversial Pentagon AI Project”, &lt;em&gt;Wired&lt;/em&gt;, 1 June 2018; Devin Coldewey, “Google introduces &apos;AI principles&apos; that prohibit its use in weapons &amp;amp; human rights abuses”, &lt;em&gt;Business and Human Rights Centre&lt;/em&gt;, 18 July 2018.&lt;/p&gt;
&lt;p&gt;[^16]:  Sydney J. Freeberg Jr., “Google To Pentagon: ‘We’re Eager To Do More’”, &lt;em&gt;Breaking Defense&lt;/em&gt;, 5 November 2019.&lt;/p&gt;
&lt;p&gt;[^17]:  “Amid New Complaints from State AGs and Federal Judges, CA Bar Must Investigate Google’s Kent Walker”, &lt;em&gt;American Economic Liberties Project&lt;/em&gt;, 3 June 2025.&lt;/p&gt;
&lt;p&gt;[^18]:  Nick Bastone, “Google&apos;s new community guidelines tell employees not to talk politics on internal forums or bad mouth projects without &apos;good information&apos;”, &lt;em&gt;Business Insider,&lt;/em&gt; 23 August 2019.&lt;/p&gt;
&lt;p&gt;[^19]:  Nico Grant, “Google to Tone Down Message Board After Employees Feud Over War in Gaza”, &lt;em&gt;The New York Times&lt;/em&gt;, 8 April 2024.&lt;/p&gt;
&lt;p&gt;[^20]:  Q.ai, “Google Layoffs: Big Tech Continues Downsizing”, &lt;em&gt;Forbes&lt;/em&gt;, 23 November 2022; Adam Satariano &amp;amp; Nico Grant, “Google Parent Alphabet to Cut 12,000 Jobs”, &lt;em&gt;The New York Times&lt;/em&gt;, 20 January 2023.&lt;/p&gt;
&lt;p&gt;[^21]:  Harriet Williamson, “UN Calls Out Google and Amazon for Abetting Gaza Genocide”, &lt;em&gt;Progressive International&lt;/em&gt;, 26 August 2025.&lt;/p&gt;
&lt;p&gt;[^22]:  “DeepMind UK staff plan to unionise and challenge deals with Israel links, FT reports”, &lt;em&gt;Reuters&lt;/em&gt;, 26 April 2025.&lt;/p&gt;
&lt;p&gt;[^23]:  Harry Davis &amp;amp; Yuval Abraham, “Microsoft blocks Israel’s use of its technology in mass surveillance of Palestinians”, &lt;em&gt;The Guardian&lt;/em&gt;, 25 September 2025.&lt;/p&gt;
&lt;p&gt;[^24]:  See: Lucy Hooker &amp;amp; Chris Vallance, “Concern over Google ending ban on AI weapons”, &lt;em&gt;BBC News&lt;/em&gt;, 5 February 2025; Sam Biddle, “Google Is Helping the Trump Administration Deploy AI Along the Mexican Border”, &lt;em&gt;The Intercept&lt;/em&gt;, 3 April 2025; “Lockheed Martin and Google Cloud Announce Collaboration to Advance Generative AI For National Security”, &lt;em&gt;Google Cloud&lt;/em&gt;, 27 March 2025;  Leigh Palmer, “Google Public Sector and Palantir collaborate to bring Google Cloud to FedStart”, &lt;em&gt;Google Cloud Blog&lt;/em&gt;, 23 April 2025; Shawn Musgrave, “Google Secretly Handed ICE Data About Pro-Palestine Student Activist”, &lt;em&gt;The Intercept&lt;/em&gt;, 16 September 2025.&lt;/p&gt;
&lt;p&gt;[^25]:  Antony Loewenstein, &lt;em&gt;The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World,&lt;/em&gt; 2024.&lt;/p&gt;
</content:encoded></item><item><title>Why I&apos;m Leaving Big Tech</title><link>https://disjunctionsmag.com/articles/why-leaving-big-tech</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/why-leaving-big-tech</guid><description>A tech worker&apos;s reflections</description><pubDate>Tue, 20 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;After spending almost two decades in big tech, I was notified last month that I am being laid off. There have been massive waves of layoffs across the industry recently, and I am just one of the many tens of thousands of tech workers impacted.[^1] Nevertheless, the news marked a moment of great personal change for me, as it prompted me to finally gather the courage to make a decision I had been putting off for years. I am leaving Big Tech.&lt;/p&gt;
&lt;p&gt;I will no longer be pursuing any job opportunities in Big Tech or Silicon Valley-type startups. This is not a decision that I am making lightly. In fact, the intention to leave Big Tech has been constantly on my mind for the last several years. I extensively debated whether to publicise my decision, and finally convinced myself that it is important that I do. Conversations with friends, colleagues, and collaborators over the years have led me to believe that I am not alone in wrestling with this.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Why am I leaving Big Tech?&lt;/em&gt; There are several reasons. While I list a few below, I believe they stem from the same underlying structural problem: an unprecedented concentration of power in the hands of those in Big Tech who want to deliberately enact (or, at least, are incapable of imagining anything other than) a techno-fascist future. I believe that is the root cause of the momentous cultural and material changes we are witnessing across the industry.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Israel is committing a genocide in Gaza against the Palestinian people, one of the worst atrocities of our times. These deaths are a result of mass bombings, weaponised starvation, destruction of civilian infrastructure, attacks on healthcare workers and aid-seekers, and forced displacement. Big Tech corporations have not only played a pivotal role in materially supporting and profiting from this ongoing genocide over the last two and a half years, but have also ruthlessly silenced any dissenting voices amongst their workers.[^2]&lt;/p&gt;
&lt;p&gt;Years ago, I learned about the infamous history of how IBM, once the Big Tech institution of its day, had provided key technological support for the Holocaust committed by Nazi Germany against the Jewish people. How naïve I was to wonder how that could have happened; never, even in my wildest nightmares, did I imagine it would become the defining technological story of our generation.[^3]&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;A decade ago, just as I was starting my PhD in information retrieval (IR), I was part of an early cohort of researchers who saw significant potential in deep learning methods for IR tasks. I co-organised the first neural IR workshop at SIGIR, co-authored a book on the topic, co-developed the MS MARCO benchmark, and co-founded the TREC Deep Learning Track. Last year, I was awarded the ACM SIGIR Early Career Researcher Award for my research on neural IR. I mention this not to brag, but as evidence of the genuine excitement I have felt over the years regarding the scientific progress in machine learning that I have both witnessed and contributed to. But today, I am deeply disconcerted by the state of AI discourse.&lt;/p&gt;
&lt;p&gt;The hype itself is not a new phenomenon. Even as I was starting out in the field, I did not care much for the sudden rebranding of neural networks as “deep learning”. In fact, in much of my early work, I continued to use the phrase “neural IR” (shortening it to “neu-ir” to sound like “new IR”) over “deep learning for IR” and other such monikers. But the hype around “AI” has taken a much more menacing turn. It has turned into something akin to a religious cult and a project of empire building that is uncompromising in its opposition to critique. Tech companies are mandating that all teams embed large language models into every feature of every product and into their own daily workflows. Whether they are actually useful or not is completely beside the point. &lt;em&gt;Why?&lt;/em&gt; Because the evidence-free promises of AI utopia that tech “leaders” are so boldly prophesying are remarkably effective at making stock prices soar. No, AI will not be a “new digital species” (however much you try to anthropomorphise next-token prediction algorithms), nor will it be a wand that magically solves climate change or war or any of our other problems. But the grand fictitious narratives about AI, both the hype and the fearmongering, will continue to bolster claims of their “foundational” advancements, creating the conditions to commodify labor, renegotiate down worker compensation, and provide political cover for further dismantling of our social services. This will result in the largest ever accumulation of power and wealth in the hands of a diminishing few, while the legitimate needs of the people, from healthcare to education, are met with “let them eat chatbots”. That &lt;em&gt;is&lt;/em&gt; the intent and why AI is a project of class domination.&lt;/p&gt;
&lt;p&gt;This is not to say that technologies like language models cannot be useful. As a researcher, I am genuinely excited by their potential to enable more accessible forms of knowledge production. Yet technological artefacts cannot be separated from the conditions under which they are created, or from the realities of who controls and profits from them. Today, developing these technologies expands racial capitalism, intensifies imperialist extraction, and reinforces the divide between the global North and South. The technology is inseparable from the labour that produces it — the expropriation of work by writers, artists, programmers, and peer-production communities, as well as the highly exploitative crowdwork of data annotation.&lt;/p&gt;
&lt;p&gt;As an IR researcher, I am particularly alarmed by the uncritical adoption of these technologies in information access, which has been a focus of my own research.[^4] I am concerned that institutions with access to vast troves of behavioural data, when combined with generative AI’s capacity to produce persuasive language and imagery, will enable large-scale manipulation of public opinion. These tools may appear no more sinister than today’s conversational information systems, or take more explicit forms in the future, such as generative advertising. Imagine a world in which every online search or interaction with a digital assistant delivers information optimised to subtly influence your consumer preferences or political beliefs.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;I harbour respect for those in the industry who are undertaking critical work on how AI can be genuinely useful to society. However, I am also deeply concerned by the shrinking power of those critical voices. Those who do such work do so under incredible pressure and with tremendous risks to their careers.[^5] The boundaries of what you are allowed to critique are rapidly narrowing. You are allowed (for now) to get on a pulpit and talk about fairness and representational harms (don’t get me wrong, those are very important!) &lt;em&gt;as long as&lt;/em&gt; it paints the corporations as “responsible institutions trying to do the right thing for society”. But you’re never allowed to criticise the corporations, especially if it conflicts in any way with profitability. The bad actors in your threat models must always be &lt;em&gt;external to&lt;/em&gt; the corporations (and their owners). Never criticise the concentration of wealth and power in the hands of a few. And, definitely, never talk about the military-AI complex.[^6]&lt;/p&gt;
&lt;p&gt;The result is the securitisation of AI discourse, which today is often framed as “AI safety”, selectively omitting questions of social justice. When so-called Responsible AI or AI ethics is defined in ways that avoid confronting exploitation, war, colonial extraction, gendered and sexual violence, and other systems of oppression, then what are we even trying to do as a community?&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;I don’t want to sound blasé, but getting laid off may have been the best thing to happen to me last year. I don’t wish to minimise how difficult it is to be on the receiving end of such news, and I am well aware of my privilege, having permanent residence status in Canada and sufficient short-term financial stability. I don’t wish this on anyone, and my heart goes out to everyone who has been similarly impacted by the recent layoffs. If you have been affected and would like to talk, please reach out! But in my personal context, this sincerely feels like a blessing in disguise. It took me a while to acknowledge it, but every passing day since I got the news, I have genuinely felt more excited about the future.&lt;/p&gt;
&lt;p&gt;Over the years, I have had the immense privilege of working with many incredibly kind and thoughtful people who mentored, collaborated with, and shaped me as a researcher and as a person. I am filled with utmost gratitude to all of you, and I hope our paths will continue to cross!&lt;/p&gt;
&lt;p&gt;And as I look to the future, I am both excited and nervous. I want to spend more time reading and engaging with critical scholarship.[^7] I want to spend more time in movement spaces. I want to find people who are thinking about alternatives to Big Tech and fighting back against the global slide into techno-fascism. I want to continue working on information access and reimagine very different futures for how we, as individuals and as society, experience information.[^8] I want to explore spaces where I can conduct research explicitly grounded in humanistic, anti-capitalist and anti-colonial values. I want to continue my work on emancipatory information access and realise my research as part of my emancipatory praxis.[^9] And above all, I want to build technology that humanises us, connects us, liberates us, and gives us joy.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Another world is not only possible, she is on her way. On a quiet day, I can hear her breathing.&lt;/em&gt;
 — Arundhati Roy&lt;/p&gt;
&lt;p&gt;Abolish Big Tech. Free Palestine.&lt;/p&gt;
&lt;p&gt;[^1]:  Kate Park, Cody Corrall and Alyssa Stringer, “A Comprehensive List of 2025 Tech Layoffs”, &lt;em&gt;TechCrunch&lt;/em&gt;, 22 December 2025. https://techcrunch.com/2025/12/22/tech-layoffs-2025-list/.&lt;/p&gt;
&lt;p&gt;[^2]:  Noa Yachot, “‘Data Is Control’: What We Learned From a Year Investigating the Israeli Military’s Ties to Big Tech”, &lt;em&gt;The Guardian&lt;/em&gt;, 30 December 2025; Marwa Fatafta, “Big Tech and the Risk of Genocide in Gaza: What Are Companies Doing?”, &lt;em&gt;Access Now&lt;/em&gt;, 11 October 2024; Federica Marsi, “UN Report Lists Companies Complicit in Israel’s ‘Genocide’: Who Are They?”, &lt;em&gt;Al Jazeera&lt;/em&gt;, 1 July 2025; Naomi Nix, Nitasha Tiku and Trisha Thadani, “Big Tech Takes a Harder Line Against Worker Activism, Political Dissent”, &lt;em&gt;The Washington Post&lt;/em&gt;, 19 May 2025.&lt;/p&gt;
&lt;p&gt;[^3]:  Oliver Burkeman, “IBM ‘Dealt Directly With Holocaust Organisers’”, &lt;em&gt;The Guardian&lt;/em&gt;, 1 April 2002.&lt;/p&gt;
&lt;p&gt;[^4]:  Bhaskar Mitra, Henriette Cramer and Olya Gurevich, “Sociotechnical implications of generative artificial intelligence for information access”. In Ryen W. White &amp;amp; Chirag Shah, eds. &lt;em&gt;Information Access in the Era of Generative AI&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^5]:  Gerrit De Vynck and Will Oremus, “As AI booms, tech firms are laying off their ethicists”, &lt;em&gt;The Washington Post,&lt;/em&gt; 3 April 2023. https://www.washingtonpost.com/technology/2023/03/30/tech-companies-cut-ai-ethics/.&lt;/p&gt;
&lt;p&gt;[^6]:  Brian J. Chen, Tina M. Park and Alex Pasternack, “Booming Military Spending on AI Is a Windfall for Tech—and a Blow to Democracy”, &lt;em&gt;Tech Policy Press&lt;/em&gt;, 20 June 2025; Ioannis Kalpouzos, “Killer Robots and the Fetish of Automation”, &lt;em&gt;Jacobin&lt;/em&gt;, 3 January 2026.&lt;/p&gt;
&lt;p&gt;[^7]:  “What Am I Reading?”, https://bhaskar-mitra.github.io/reading/.&lt;/p&gt;
&lt;p&gt;[^8]:  Bhaskar Mitra, &quot;Search and Society: Reimagining Information Access for Radical Futures&quot;, &lt;em&gt;Information Retrieval Research Journal (IRRJ),&lt;/em&gt; 2025.&lt;/p&gt;
&lt;p&gt;[^9]:  Bhaskar Mitra, &quot;Emancipatory Information Retrieval”, &lt;em&gt;Information Retrieval Research Journal (IRRJ)&lt;/em&gt;, 2025.&lt;/p&gt;
</content:encoded></item><item><title>Always Already Betas</title><link>https://disjunctionsmag.com/articles/always-already-betas</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/always-already-betas</guid><description>The user subject and the GPT grindset</description><pubDate>Sun, 11 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;One striking feature of contemporary “artificial intelligence”, whatever that might mean today, is that it is somehow nothing and everything at the same time.[^1] On the one hand, it demonstrates impressive feats of aggregation[^2] and compression;[^3] on the other, it fails spectacularly at tasks of both logic (expected, given the technical structure of large language models) and recall (underexpected, because wasn’t this one of the defining “advances” made by AI?)[^4] The result, then, is a temporal distention, where generative AI in 2025 is &lt;em&gt;kinda&lt;/em&gt; already here but also always only &lt;em&gt;really&lt;/em&gt; arriving in the future — either via improvements or via the holy grail of AGI. The latter part of this temporality can be understood as a scalar movement through the speculative, where the next-word prediction (of an LLM) and the next-moment prediction (of financial investments) find themselves entangled in the look to the future: what may happen? What may be wanted?[^5] An important part, however, of this speculative dimension is one that is embodied in the user: the subject &lt;em&gt;par excellence&lt;/em&gt; of the service economy.[^6] If all of us are users — and if this has somehow superseded our being citizens or workers (big if) — then perhaps it is worth asking what &lt;em&gt;kind&lt;/em&gt; of user &lt;em&gt;this&lt;/em&gt; particular user (of generative AI) is.&lt;/p&gt;
&lt;p&gt;I want to argue here that this user is a beta: a tester, first and foremost, even before they get to be a consumer. Further, this “being beta” marks a specific, testy relationship between citizens of the global North and the matrices of extraction that they are caught in today.[^7]&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;It might be evident to some readers how we are all testers today (some of us more than others, of course). As tech companies fail to generate the profits they desire from infrastructurally expensive chatbots, cheap VC-funded chats with LLMs end up underscoring how generative AI is truly a solution in search of a problem.[^8] Each interaction with an LLM, then — be it awe-inducing or underwhelming or somewhere in between — is an attempt to automate (part of) some other labour pipeline. In doing so, it also becomes an attempt to gather more data on said pipeline, while the whole world throws the kitchen sink at a piece of technology that has a very specific &lt;em&gt;modality&lt;/em&gt; — one that masquerades as an all-purpose suggestion of a possibly useful tool.[^9] It is true that there is often some alpha testing that takes place inside company/lab offices before a new model is released; but by and large, it is clear that the mantle of beta testing now falls squarely upon all of us.[^10] This has a long recent history: companies once outsourced private R&amp;amp;D and the development of intellectual property to the much cheaper, subsidised public education system;[^11] they outsourced the labour of writing code to the global South;[^12] and they crowdsourced insights and data collection to all of us, and to our sociality by extension, making us complicit in the very production of the machines that we use.[^13] Today, they outsource not just the personal, cultural, and social implications of their product (the so-called alignment), but also the very product-ness of their product (what it is for, what it does, what it cannot do) to millions of beta testers today.[^14] If we cannot rely upon the given product to do what we want it to do (or hope for it to do, or delude ourselves into thinking it can do), then it contributes to the general precariousness of our existence under late capitalism, while also reminding us how some technologies were never tools — and perhaps could never be(come) tools.[^15]&lt;/p&gt;
&lt;p&gt;The subjective move from being a user to being a beta tester/user does not just tell a straightforward tale of expanding work-as-precarity, but also signifies the present-day social relationships between technology and extraction writ large.[^16] At the heart of this move is how extraction is enabled across distributions of time and space. Two simple terms-as-models can help elucidate what is not being straightforwardly discussed here. The first is &lt;em&gt;extraction&lt;/em&gt; (of resources, as surplus value or via explicit violence) from one part of the world (say South) to another (say North); the second is &lt;em&gt;accumulation&lt;/em&gt; (of resources, such as by means of enclosure), and the entrenchment of a given state of materialities. What is at stake here is instead the act of &lt;em&gt;imagination&lt;/em&gt; of an operation — what even is going on when a user meets the so-called used product? — that is being extracted and accumulated.&lt;/p&gt;
&lt;p&gt;To refract via some conventional Marxist frameworks, the user is being asked to consider possible reifications, and to hand over the blueprints for the same to the capitalist.[^17] The user sits down at the screen and thinks step-by-step with the machine, a common prompt-interface modality, and throws possible use-cases at the model — tracking the efficacy of the fledgling cyber-homunculus,[^18] guiding and coaching it into usability, into getting better at tasks that will one day, in the future, be automated.[^19] Perhaps the real agentic AI was the users we agent-ified all along.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Allow me to ruminate on the beta-ness of it all. On the one hand, as a compulsory beta tester, every user makes deals with the incompleteness and the ongoing-ness of the state of affairs around us. If something is broken or wrong at the moment, it is because it will be fixed in the future (by more of the same).[^20] Even if a promise proved flawed in the moment of interaction, maybe it’s because &lt;em&gt;we&lt;/em&gt; — the betas[^21] — didn’t try hard enough?[^22]&lt;/p&gt;
&lt;p&gt;On the other hand, a beta tester/user also emerges as &lt;em&gt;a&lt;/em&gt; beta, the perpetually embarrassed loser in the folk pseudo-socio-psychology that is the wolf-pack nonsense. A beta, in this sense, is someone not atop the pecking order; a static mirage observed by some biologists, who themselves now realise how familial solidarities are a more complicated affair in animals and humans alike. But our beta user never got that memo from ChatGPT. He — and now it is a he — still wants to be an alpha; and a sigma grindset, leading him to higher productivity through automation and assistance, is his way to get there. But in this desire, too, he remains a beta, with LLMs undercutting his productivity, while at the same time increasing the speculative productivity of a future market thoroughly infused with GAI or AGI. Like all stupid pop-psych garbage, the beta as a marker remains typologised unfairly, but in a way that structurally prohibits the move out of the stereotype. “Always already a beta”, the structure tells the beta user. And to think: this beta user is not even the &lt;em&gt;real&lt;/em&gt; worker, who is elsewhere — in the global South, inside the factories of materiality.[^23]&lt;/p&gt;
&lt;p&gt;And yet, there is revenge in the offing. Alongside the secret sauce of the operationalisation of intellectual processes (or not-so-secret; in most cases, it is just tacit knowledge being articulated symbolically), the beta user ends up polluting the very well that holds his extracted insights.[^24] As is being demonstrated by the recent flagging results from bigger and newer LLMs — and the monster of scale could certainly never have been slain so easily — the very fact of knowledge extraction comes fully equipped with its own dialectical movement: the extraction of ignorance.[^25] In this global monkeys-on-a-typewriter experiment, OpenAI (and Google, and Meta, and so on) expect an eventual convergence between experts and expert tasks; in other words, the companies assume and hope that if enough experts train our systems for long enough, our systems will one day exhibit the same expertise.[^26] But simply because most of what we do online, or on computational media writ large, is stuff that we have little clue about, the beta users end up conveying even more of wrong-ness and &lt;em&gt;doesn’t-work-that-way&lt;/em&gt;-ness than of what is actually right, or of how to do or evaluate something. In this regard, the “we do not use student data” move by AI companies should not be read solely as a legal arse-covering (even though it is one: several legal frameworks have strict codes about what information can and cannot be shared outside a prescribed educational environment). It is also, concurrently, an attempt to channel away the worst pollutants of this future well of (reification) wisdom: the students who are clearly still learning, and often simultaneously trying not to learn, as all learning is, by definition. By clearly marking such interactions separately, corporations hope to rejuvenate the dying model-cycle, which was already showing clear signs of decay — either by disease (of existence), or by knowledge pollution, or through the antinomies of synthetic data.[^27] If not the final, then perhaps the penultimate laugh — a maniacal laugh — is the beta user’s; a moment of latent sigma-fication.&lt;/p&gt;
&lt;p&gt;The true sigma move, in this (mildly) new set of social relations, then, I argue, is to be as stupid as is humanly possible. I promise to do my bit. Will you do yours?&lt;/p&gt;
&lt;p&gt;[^1]:  Lucy Suchman, “The Uncontroversial ‘Thingness’ of AI”, &lt;em&gt;Big Data &amp;amp; Society&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^2]:  Fernando van der Vlist &lt;em&gt;et al.&lt;/em&gt;, “The Political Economy of AI as Platform: Infrastructures, Power and the AI Industry”, &lt;em&gt;AoIR Selected Papers of Internet Research&lt;/em&gt;, 2024. See also: Dieuwertje Luitse, “Platform Power in AI: The Evolution of Cloud Infrastructures in the Political Economy of Artificial Intelligence”, &lt;em&gt;Internet Policy Review&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^3]:  Ted Chiang, “ChatGPT Is a Blurry JPEG of the Web”, &lt;em&gt;The New Yorker&lt;/em&gt;, 9 February 2023; Hito Steyerl, “Mean Images”, &lt;em&gt;New Left Review&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^4]:  For a discussion of LLMs and logic-oriented tasks see: Ranjodh Singh Dhaliwal, “A Few Notes on the Scalar Foundations of Foundation Models”, &lt;em&gt;Cambridge Forum on AI: Culture and Society&lt;/em&gt;, 2025. For a treatment of recall and retrieval see: Yunfan Gao et al., “Retrieval-Augmented Generation for Large Language Models: A Survey”, arXiv, 2024.&lt;/p&gt;
&lt;p&gt;[^5]:  uncertain commons, &lt;em&gt;Speculate This!&lt;/em&gt;, 2013; Sun-ha Hong, “Prediction as Extraction of Discretion”, &lt;em&gt;Big Data &amp;amp; Society&lt;/em&gt;, 2023; Sun-ha Hong, “Predictions Without Futures”, &lt;em&gt;History and Theory&lt;/em&gt;, 2022.&lt;/p&gt;
&lt;p&gt;[^6]:  Tung-Hui Hu, &lt;em&gt;A Prehistory of the Cloud&lt;/em&gt;, 2016; Edoardo Biscossi, &lt;em&gt;The User and the Used: Platform Mediation, Labour and Pragmatics in the Gig Economy&lt;/em&gt;, 2022; Markus Krajewski, trans. Ilinca Iurascu, &lt;em&gt;The Server: A Media History from the Present to the Baroque&lt;/em&gt;, 2018; Ranjodh Singh Dhaliwal, “The Cyber-Homunculus: On Race and Labor in Plans for Computation”, &lt;em&gt;Configurations&lt;/em&gt;, 2022; Christian Ulrik Andersen and Søren Bro Pold, “The User as a Character, Narratives of Datafied Platforms”, &lt;em&gt;Computational Culture&lt;/em&gt;, 2021; Matthew L. Jones, “Users Gone Astray: Spreadsheet Charts, Junky Graphics, and Statistical Knowledge”, &lt;em&gt;Osiris&lt;/em&gt;, 2023; Polina Kolozaridi, “Unstable Users: Coordinating the Configuration of Digital Objects and Projects”, &lt;em&gt;Technology and Language&lt;/em&gt;, 2025; Scott Kushner, “The Instrumentalised User: Human, Computer, System”, &lt;em&gt;Internet Histories&lt;/em&gt;, 2021; Joanne McNeil, &lt;em&gt;Lurking: How a Person Became a User&lt;/em&gt;, 2019.&lt;/p&gt;
&lt;p&gt;[^7]:  Ranjodh Singh Dhaliwal, “Organic Division of Labor — Ergonomics/Cybernetics of Labor — Inorganic Division of Labor”. In Zach Blas et al., eds. &lt;em&gt;Informatics of Domination&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^8]:  As I note elsewhere in my work, only two proper problems seem to have been found until now: the drudgery that is educational output for metrics-based credentialing (cheating at the school/college level), and global loneliness (i.e. the rapid disappearance of sociality, and its replacement with networked intimacies). See Brian Merchant, “AI Generated Business: The Rise of AGI and the Rush to Find a Working Revenue Model”, &lt;em&gt;AI Now Institute,&lt;/em&gt; 2024; Ranjodh Singh Dhaliwal, “Generating an Artificial Democracy: On Sociological Intimacies of Bots and/as Personas”, &lt;em&gt;transmediale&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^9]:  Ranjodh Singh Dhaliwal, “The Infrastructural Unconscious: Do Computers Dream of Carbo-Silico Pipelines?”. In Bernhard Siegert and Benedikt Merkle, eds. &lt;em&gt;Reckoning with Everything&lt;/em&gt;; Ranjodh Singh Dhaliwal, “Concretion.: (Noun, ?1541 AD - Now)”, &lt;em&gt;Basel Media Culture and Cultural Techniques Working Papers&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^10]:  In a conventional software cycle, this is a phase where bugs are ironed out of feature-complete software through private/public testing. See: Geoff Duncan, “Waiting with Beta’d Breath”, &lt;em&gt;TidBITS&lt;/em&gt;, 1996.&lt;/p&gt;
&lt;p&gt;[^11]:  Philip Mirowski, &lt;em&gt;Science-Mart: Privatizing American Science&lt;/em&gt;, 2011; Matthew Kirschenbaum and Rita Raley, “AI and the University as a Service”, &lt;em&gt;Publications of the Modern Language Association of America&lt;/em&gt;, 2024; Jacob H. Rooksby, &lt;em&gt;The Branding of the American Mind: How Universities Capture, Manage, and Monetize Intellectual Property and Why It Matters&lt;/em&gt;, 2016. Closely related to this notion of training in the educational and work-experience sense of social reproduction is, of course, the training data (needed for generating generative AI) and the training of generative AI (that happens during reinforcement learning or after a model has been released to the public).&lt;/p&gt;
&lt;p&gt;[^12]:  Sareeta Amrute, &lt;em&gt;Encoding Race, Encoding Class: Indian IT Workers in Berlin&lt;/em&gt;, 2016; Héctor Beltrán, &lt;em&gt;Code Work: Hacking across the US/México Techno-Borderlands&lt;/em&gt;. In Daniela Rivero, ed. Princeton Studies in Culture and Technology, 2023.&lt;/p&gt;
&lt;p&gt;[^13]:   Tiziana Terranova, &lt;em&gt;Network Culture: Politics for the Information Age&lt;/em&gt;, 2010; Tiziana Terranova, “Technoliberalism and the Network Social”, &lt;em&gt;Theory, Culture &amp;amp; Society&lt;/em&gt;, 2024; Tiziana Terranova, &lt;em&gt;After the Internet: Digital Networks between Capital and the Common&lt;/em&gt;, Semiotext(e) Intervention Series, 2022; Tiziana Terranova, “Free Labor”, &lt;em&gt;Social Text&lt;/em&gt;, 2000.&lt;/p&gt;
&lt;p&gt;[^14]:  Katia Schwerzmann and Alexander Campolo, “‘Desired Behaviors’: Alignment and the Emergence of a Machine Learning Ethics”, &lt;em&gt;AI &amp;amp; Society&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^15]:  Tools can be understood, in this context, as implements which have a straightforward user/used distinction, while technology as a system complicates it. For more, see Ranjodh Singh Dhaliwal and Bernhard Siegert, “Knowing, Studying, Writing: A Conversation on History, Practice, and Other Doings with Technics”, in Nicholas Baer and Annie Oever, eds., &lt;em&gt;Technics: Media in the Digital Age&lt;/em&gt;, 2024; Ranjodh Singh Dhaliwal, “What Do We Critique When We Critique Technology?”, &lt;em&gt;American Literature&lt;/em&gt;, 2023.&lt;/p&gt;
&lt;p&gt;[^16]:  Aaron Benanav, &lt;em&gt;Automation and the Future of Work&lt;/em&gt;, 2020.&lt;/p&gt;
&lt;p&gt;[^17]:  Dhaliwal, “The Cyber-Homunculus”; Timothy Bewes, &lt;em&gt;Reification, or, The Anxiety of Late Capitalism&lt;/em&gt;, 2022; Fredric Jameson, “Reification and Utopia in Mass Culture”, &lt;em&gt;Social Text&lt;/em&gt;, 1979.&lt;/p&gt;
&lt;p&gt;[^18]:  Or a “clanker”, if you have a different sense of civility than me: https://knowyourmeme.com/memes/clanker.&lt;/p&gt;
&lt;p&gt;[^19]:  Fabian Offert and Ranjodh Singh Dhaliwal, “The Method of Critical AI Studies, A Propaedeutic”, arXiv, 2024; Ben Grosser and Søren Bro Pold, “Reading the Praise/Prompt Machine: An Interface Criticism Approach to ChatGPT”, &lt;em&gt;Proceedings of the Sixth Decennial Aarhus Conference: Computing X Crisis&lt;/em&gt;, 2025; Sarah Burkhardt and Bernhard Rieder, “Foundation Models Are Platform Models: Prompting and the Political Economy of AI”, &lt;em&gt;Big Data &amp;amp; Society&lt;/em&gt;, 2024.&lt;/p&gt;
&lt;p&gt;[^20]:  Théo Lepage-Richer, “Adversariality in Machine Learning Systems: On Neural Networks and the Limits of Knowledge”. In Jonathan Roberge and Michael Castelle, eds. &lt;em&gt;The Cultural Life of Machine Learning: An Incursion into Critical AI Studies&lt;/em&gt;, 2021; Ranjodh Singh Dhaliwal et al., &lt;em&gt;Neural Networks&lt;/em&gt;, In Search of Media, 2024.&lt;/p&gt;
&lt;p&gt;[^21]:  In Hindustani, as in some other Indic languages, “beta” means “son” — indexing a paternalisation inherent to being a beta.&lt;/p&gt;
&lt;p&gt;[^22]:  John Naughton, “Did AI Mania Rush Apple into Making a Rare Misstep with Siri?”, &lt;em&gt;The Guardian&lt;/em&gt;, 22 March 2025.&lt;/p&gt;
&lt;p&gt;[^23]:  Karen Hao, &lt;em&gt;Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI&lt;/em&gt;, 2025; Paola Tubaro et al., “The Trainer, the Verifier, the Imitator: Three Ways in Which Human Platform Workers Support Artificial Intelligence”, &lt;em&gt;Big Data &amp;amp; Society&lt;/em&gt;, 2020.&lt;/p&gt;
&lt;p&gt;[^24]:  Matteo Pasquinelli, &lt;em&gt;The Eye of the Master: A Social History of Artificial Intelligence&lt;/em&gt;, 2023; Hannes Bajohr, ed. &lt;em&gt;Thinking with AI: Machine Learning the Humanities&lt;/em&gt;, 2025; Leif Weatherby, &lt;em&gt;Language Machines: Cultural AI and the End of Remainder Humanism&lt;/em&gt;. Posthumanities, 2025.&lt;/p&gt;
&lt;p&gt;[^25]:  Jared Kaplan et al., “Scaling Laws for Neural Language Models”, arXiv, 2020; Ethan Caballero et al., “Broken Neural Scaling Laws”, arXiv, 2022.&lt;/p&gt;
&lt;p&gt;[^26]:  See Brian Merchant’s excellent reporting on job losses, and on certain industries using workers (who are soon to be laid off) to make their AI slop look less sloppy. See also: Roland Meyer, “‘Platform Realism’. AI Image Synthesis and the Rise of Generic Visual Content”, &lt;em&gt;Transbordeur: photographie histoire société&lt;/em&gt;, 2025.&lt;/p&gt;
&lt;p&gt;[^27]:  Felicia Jing et al., “On Emplotment: Phantom Islands, Synthetic Data, and the Coloniality of Simulated Algorithmic Space”, &lt;em&gt;Social Text&lt;/em&gt;, 2026; Benjamin N. Jacobsen, “Machine Learning, Synthetic Data, and the Politics of Difference”, &lt;em&gt;Theory, Culture &amp;amp; Society&lt;/em&gt;, 2025; Shane Denson, “On the Very Idea of a (Synthetic) Conceptual Scheme”, &lt;em&gt;Philosophy &amp;amp; Digitality&lt;/em&gt;, 2025; David M. Berry, “Synthetic Media and Computational Capitalism: Towards a Critical Theory of Artificial Intelligence”, &lt;em&gt;AI &amp;amp; Society&lt;/em&gt;, 2025.&lt;/p&gt;
</content:encoded></item><item><title>Occupied Assets</title><link>https://disjunctionsmag.com/articles/occupied-assets</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/occupied-assets</guid><description>Israeli neoliberalism and the datafication of Palestinian life</description><pubDate>Mon, 05 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In his book &lt;em&gt;Neoliberal Apartheid,&lt;/em&gt; Andy Clarno emphasises that “one of the most important impacts of neoliberal restructuring” is the “production of racialised surplus populations.”[^1] The domination of such populations takes the seemingly paradoxical form of their expulsion from the direct exploitation of their labour in the production process. Instead of exploitation, various forces — including displacement, automation, and financialisation — have resulted in a global expansion and proliferation of various forms of precarious life, as well as the emergence of new classes of disposable humanity. Tracking this process, Clarno leads us to Palestine, where this production of Palestinians as surplus humanity has been intrinsic to the Zionist state project. Indeed, he suggests, the neoliberal character of contemporary Israeli settler colonialism has emerged through calculatedly &lt;em&gt;avoiding&lt;/em&gt; the exploitation of Palestinians as labourers.&lt;/p&gt;
&lt;p&gt;Clarno glosses the prehistory of this decision as follows. After 1948 and into the mid-1980s, Palestinians were integrated into Israel’s economy by providing low-wage labour, mostly in construction and agriculture.[^2] However, beginning in the late 1980s, Israel’s shift towards a neoliberal economy diminished the need for Palestinian labour. As Israel transitioned to a high-tech economy, demand for industrial and agricultural workers dropped, and free trade agreements allowed Israel to outsource production — its textile industry, for example — to neighbouring countries. Meanwhile, Israel simultaneously tightened work permit restrictions for Palestinians and took advantage of newly accessible surpluses of precarious labour. As it brought in large numbers of noncitizen workers, it minimised its reliance on Palestinian labour.&lt;/p&gt;
&lt;p&gt;In 2006, the wholesale elimination of work permits for Palestinians in Gaza consolidated a situation in which unemployment rates continually grew to exceed 30%.[^3] Unemployment is only one small measure of a larger set of processes by which Israeli state policy has attempted to cast Palestinian life not only as a threat to its existence, but as a &lt;em&gt;particular kind&lt;/em&gt; of generally superfluous life. Such processes are the outcome of an epistemological project packaged in Israeli politics under the label of “security”. Indeed, the emergence of Israel as a giant in the defence and tech industries has persistently depended on its ability to render those deemed as “threats” perpetually available for knowledge extraction. Israel’s “success” is entirely contingent upon the forms of enclosure and predation to which it subjects Palestinians.&lt;/p&gt;
&lt;p&gt;To inform the state’s practices of social control and militarised occupation, Israel’s security industry offers a plethora of systems specifically designed to extract and process biometric, personal, and behavioural data. Palestinian life is, as promised, constantly surveilled. But Palestinians are not simply the objects of this project of knowability. They are also the testing ground through which the Israeli defence industry comes to learn about itself and understand its own capabilities — trialling its products on live human targets to hone and perfect them over time.&lt;/p&gt;
&lt;p&gt;To understand Palestinians as simultaneously disposable and central to the political and economic organisation of Israel is to point to a contradiction. “Despite Israel’s celebrated ‘disengagement’ [...] in 2005,” writes Clarno, Gaza “remains Israel’s principal laboratory for securitization and extraordinary violence.”[^4] The production of Palestinians as enemies (politically), as disposable or inessential labourers (economically), and, ultimately, as data-bodies and test subjects (epistemologically) has transformed them into objects of &lt;em&gt;assetisation&lt;/em&gt; for Israeli neoliberalism.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Birch and Muniesa theorise assetisation as an answer to a conceptual problem: in contemporary capitalism, what drives the accumulation of capital is no longer primarily the proliferation of the commodity form. In today’s technoscientific capitalism, they write, “the dominant &lt;em&gt;form&lt;/em&gt; […] is not the commodity but the asset.”[^5] They continue: “by asset, we mean something that can be owned or controlled, traded, and capitalised as a revenue stream […] it could be a piece of land, a skill or experience, a sum of money, a bodily function or affective personality, a life-form, a patent or copyright, and so on.” Assetisation offers an apt framework through which to interpret what appeared as a contradiction — the fact that the expulsion of Palestinians from the labour force has enabled new modes of technoscientific enclosure. In such contexts, it seems that new forms of accumulation collude with older, longer-established ones.&lt;/p&gt;
&lt;p&gt;In the neoliberal era, Israel’s state-constitutive practice of land theft has been intensified by new financial logics that allow the production of Palestinian vulnerability, the seizure of Palestinian land, and the creation of financial assets to flow directly into one another. In her work on public finance, Melinda Cooper challenges the conventional understanding of “tax breaks”, arguing that state-directed exemptions from taxation are designed to facilitate private investment choices that have “the same effect on Treasury accounts as direct government spending”.[^6] What is typically designated as a “break”, then, is better understood as a form of &lt;em&gt;state spending&lt;/em&gt; — hence the term “tax expenditures”. This framing clarifies cases such as Israel’s so-called “periphery tax break”, a fiscal measure intended to incentivise settlement development far from the state’s urban centres. In early November 2025, the Knesset — Israel’s unicameral legislature — authorised an additional 35 per cent in spending for settlements designated as “under threat”.[^7]&lt;/p&gt;
&lt;p&gt;Beyond the formal and informal militarisation of occupation, then, the development of private property through illegal settlements transforms violence against Palestinians into a mechanism of asset class formation. To Israelis and to potential investors recruited at synagogues and housing sales, the comparatively low cost of housing in illegal settlements in the West Bank has been rebranded as “affordable housing”, a solution to the so-called affordability crisis in Israel “proper”. This incentivisation — at the expense of Palestinian lives and livelihoods — further fuels the growth of transnational real estate and property-management industries, which leverage these state expenditures to market settlement housing to buyers in Los Angeles and New Jersey.[^8]&lt;/p&gt;
&lt;p&gt;Real estate is only one of a broader set of asset classes that have emerged from the production of Palestinian precarity. The ongoing manufacture of entitlement to Palestinian land is inextricable from (and routed through) another site of assetisation. Here, the consolidation of the settler-colonial state relies on a persistent narrative of vulnerability — one that casts the state as mortally threatened by its own victims, and thereby justifies situating Palestinians at the centre of neoliberal Israel’s extensive security economy.&lt;/p&gt;
&lt;p&gt;As Nadera Shalhoub-Kevorkian argues, the construction of Palestinians as security risks underwrites the quotidian surveillance they are subjected to. By incessantly monitoring Palestinians, Israel “seeks to incorporate them into the polity as threatening Others who must be placed under constant surveillance and control.”[^9] This everydayness of surveillance establishes the need for what Shalhoub-Kevorkian terms an “industry of fear”, a political economy that necessitates, by its very foundations, both the reproduction of fear and the promise of its overcoming. Discursively, securitisation runs on the fumes of its own contradictions, continually needing to manufacture the problem it claims it will solve. It is here, where the politics of knowledge meets the extractivist economy of techno-Zionism, that datafication has emerged as an industry-defining force.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Datafication, the process of converting human life into quantifiable digital traces accessible to algorithmic analysis and manipulation, allows for the integration of Palestinian bodies into Israel’s neoliberal economy as sites of continuously extractable and computable data. Their data traces — captured at checkpoints, intercepted through phone networks, or scraped from social media — become the raw material of the assetisation of Palestinian life, datafication being its mining process. For Mejias and Couldry, datafication involves two inseparable elements: “the transformation of human life into data” through quantification, and “the generation of different kinds of value from data”.[^10]&lt;/p&gt;
&lt;p&gt;The first element of datafication, the quantification of life, requires mechanisms and infrastructures for data collection, compilation, processing, and storage. Israel deploys a panoply of data-capture systems — CCTV cameras, licence plate readers, biometric checkpoint scanners, facial recognition cameras, computer vision-equipped drones, spyware, social-media monitoring tools, and more — all of which continuously extract data about and from Palestinian life.[^11] Israel consolidates these accumulated abstractions in massive databases, such as the &lt;em&gt;Wolf Pack&lt;/em&gt; database, which assembles them into profiles about virtually every Palestinian, including their photographs, family members and history, educational status, and licence plates.[^12] Israel stores this data on in-house servers as well as cloud services that extend its storage capacity.[^13] The “near-limitless storage capacity” unlocked through these contracts means that Israel is not constrained by the need to focus on collecting data on specific surveillance targets.[^14] It can surveil everyone.[^15]&lt;/p&gt;
&lt;p&gt;The second dimension of datafication — that of generating value — includes “monetisation but also means of state control”.[^16] In the Israeli context, this surveillance data is formally subsumed under the amorphous and fungible category of “intelligence”. In other words, it exists to inform and direct military decision-making, such as the identification of “threats” and the determination of targets. In practice, soldiers and state-deputised settlers responsible for settlement security routinely access databases such as &lt;em&gt;Wolf Pack&lt;/em&gt; to justify checkpoint denials, arrests, raids, and the dispersal of protests.[^17] Israel values this datafication process not only for its role in decision-making but also for its legitimation of those decisions, including retroactively. Israeli military officers have openly acknowledged using Palestinians’ data to justify arrests or killings, “even after the fact”, noting that “[w]hen they need to arrest someone and there isn’t a good enough reason to do so, that’s where they find the excuse.”[^18] More recently, these data traces have also been used to train and feed algorithmic models and systems that purport to automate the entire pipeline of evaluation and targeting. The most necropolitical manifestations of this are the &lt;em&gt;Lavender&lt;/em&gt; and &lt;em&gt;The Gospel&lt;/em&gt; target-generation systems, which were revealed to be in operation in Gaza over the past two years. In real time, these systems assign individuals and buildings numerical scores that mark them as targets for bombing. The kill lists generated by these systems were used by the Israeli military to carry out thousands upon thousands of nominally “targeted” bombings with no regard for casualties.[^19]&lt;/p&gt;
&lt;p&gt;Israel’s military occupation is far from the sole beneficiary of the datafication of Palestinians. Private security-technology firms provide the state with the technical infrastructure through which vast quantities of data are collected, consolidated, and analysed. These companies also use this data to train and improve the systems they deploy in occupied Palestine, with the ultimate aim of selling them in a global marketplace hungry for tools to identify targets, monitor populations, and suppress dissent. Israeli security contractors like AnyVision (now renamed Oosto) have, for instance, deployed facial recognition systems across more than 115,000 cameras throughout the West Bank to track the movements of Palestinians.[^20] Variants of that same technology are now used in airports, train stations, and stadiums worldwide.[^21] Given that facial recognition systems require vast amounts of data to be accurate, it is not far-fetched to infer that the accumulation and processing of Palestinians’ facial data was instrumental in training the products that AnyVision sells for commercial use. Indeed, the company’s CEO has explicitly acknowledged that the technology was first validated in the West Bank — before generating more than 95 per cent of the company’s revenue through sales outside of Israel.[^22]&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;The “Palestinian Laboratory” has become a common framework for analysing how Israel uses Palestinian territories as sites for testing and refining its military, surveillance, and security technologies.[^23] Within this framing, Palestinians are often characterised as test subjects, a designation reinforced by Israeli weapons and security manufacturers’ use of the term “battle-tested” to market their products.&lt;/p&gt;
&lt;p&gt;The framing of Palestinians as “test subjects”, however, fails to grasp their centrality to the political economy of neoliberal Israel. Palestinians are not merely subjected to technological experimentation; they are fully embedded in the &lt;em&gt;training, operation, and optimisation&lt;/em&gt; of Israel’s security technologies. The digital traces of Palestinian life are inscribed across the entire lifecycle of the technologies Israel develops and sells globally. Moreover, the capacity to extract and generate value from Palestinians’ data enables a broader set of financial instruments that channel investment into Israel’s surveillance and securitisation industries.&lt;/p&gt;
&lt;p&gt;Seen this way, the extractive economy — and the necropolitical order that enables it — can be more accurately understood as one in which Palestinians themselves are rendered an &lt;em&gt;asset class&lt;/em&gt;. This reframing becomes especially salient when considering Israel’s dependence on the global security market. Indeed, Israel’s economic and geopolitical future is increasingly tied to the development and export of security technologies. As noted by Privacy International, Israel today is home to more surveillance companies per capita than any other country in the world.[^24] It is also the world’s leading exporter of spyware and digital forensics tools.[^25] In a mutually beneficial arrangement, the Israeli government and military rely on private Israeli firms — such as NSO Group, Cellebrite, Cytrox, and Candiru — to carry out their technological development, whether as contractors or as employers of tech workers enlisted to contribute expertise and added capacity while on reserve duty.[^26] In turn, these companies benefit from lenient export controls and capitalise on Israel’s diplomatic efforts, which often facilitate the sales of their technologies while simultaneously helping to normalise relations with purchasing countries.[^27]&lt;/p&gt;
&lt;p&gt;This reliance on Palestinian datafication creates a form of asset dependency. Palestinians, in essence, become a critical asset class upon which Israel’s futurity is built. This asset dependency exists in a complex relationship with the eliminationist character of Israeli settler-colonialism. To some extent, at least, it is clear that the data regime of Israel’s security sector depends on a surveillable population. Over the past two years, however, the mass killing and displacement of Palestinians has disrupted the very systems of surveillance and monitoring, and the stable supply of data upon which the security industry depends. Israeli military officers have, for instance, raised concerns about the impact that destroying Gaza’s telecommunications infrastructure would have on their ability to intercept and surveil Palestinians’ communication, citing a reduced “volume of phone calls in the territory”.[^28]&lt;/p&gt;
&lt;p&gt;The neoliberal project that has turned Palestinian life into an asset for its colonisers may now be forced to confront the possibility of its own exhaustion, as the occupation devours its own data substrate — collapsing the promise of an economy of total knowability under the weight of the state’s genocidal logics.&lt;/p&gt;
&lt;p&gt;[^1]:  Andy Clarno, &lt;em&gt;Neoliberal Apartheid: Palestine/Israel and South Africa After 1994&lt;/em&gt;, 2019, p. 15.&lt;/p&gt;
&lt;p&gt;[^2]:  Clarno, &lt;em&gt;Neoliberal Apartheid&lt;/em&gt;, p. 30.&lt;/p&gt;
&lt;p&gt;[^3]:  Palestinian Central Bureau of Statistics, &lt;em&gt;On the Occasion of International Workers’ Day: President of the Palestinian Central Bureau of Statistics Ms. Ola Awad Presents the Current Status of the Palestinian Labour Force&lt;/em&gt;, 2019.&lt;/p&gt;
&lt;p&gt;[^4]:  Clarno, &lt;em&gt;Neoliberal Apartheid&lt;/em&gt;, p. 42.&lt;/p&gt;
&lt;p&gt;[^5]:  Kean Birch and Fabian Muniesa, eds. &lt;em&gt;Assetization: Turning things into assets in technoscientific capitalism&lt;/em&gt;, 2020, pp. 1-2.&lt;/p&gt;
&lt;p&gt;[^6]:  Melinda Cooper, &lt;em&gt;Counterrevolution: Extravagance and austerity in public finance&lt;/em&gt;, 2024, p. 17.&lt;/p&gt;
&lt;p&gt;[^7]:  Noa Shpigel, “Knesset Advances Bill Granting Tax Breaks to Israeli West Bank Settlements in ‘Threatened’ Areas”, &lt;em&gt;Haaretz&lt;/em&gt;, 12 November 2025.&lt;/p&gt;
&lt;p&gt;[^8]:  Jonah Valdez, “The Companies Making It Easy to Buy in a West Bank Settlement”, &lt;em&gt;The Intercept&lt;/em&gt;, 11 July 2024.&lt;/p&gt;
&lt;p&gt;[^9]:  Nadera Shalhoub-Kevorkian, &lt;em&gt;Security Theology, Surveillance and the Politics of Fear,&lt;/em&gt; 2015, pp. 5-7.&lt;/p&gt;
&lt;p&gt;[^10]:  Ulises A. Mejias and Nick Couldry, “Datafication”, &lt;em&gt;Internet Policy Review&lt;/em&gt; 8, no. 4, 2019.&lt;/p&gt;
&lt;p&gt;[^11]:  Sophia Goodfriend, “Algorithmic State Violence: Automated Surveillance and Palestinian Dispossession in Hebron’s Old City”, &lt;em&gt;International Journal of Middle East Studies&lt;/em&gt; 55, no. 3, 2023, pp. 461-478.&lt;/p&gt;
&lt;p&gt;[^12]:  Elizabeth Dwoskin, “Israel Escalates Surveillance of Palestinians With Facial Recognition Program in West Bank”, &lt;em&gt;The Washington Post&lt;/em&gt;, 8 November 2021.&lt;/p&gt;
&lt;p&gt;[^13]:  Yuval Abraham, “‘Order From Amazon’: Tech Giants Storing Mass Data for Israel’s War”, &lt;em&gt;+972 Magazine&lt;/em&gt;, 4 August 2024.&lt;/p&gt;
&lt;p&gt;[^14]:  Harry Davies and Yuval Abraham, “‘A Million Calls an Hour’: Israel Relying on Microsoft Cloud for Expansive Surveillance of Palestinians”, &lt;em&gt;The Guardian&lt;/em&gt;, 7 August 2025.&lt;/p&gt;
&lt;p&gt;[^15]:  Lubna Masarwa, “Israel Can Monitor Every Telephone Call in West Bank and Gaza, Says Intelligence Source”, &lt;em&gt;Middle East Eye&lt;/em&gt;, 17 November 2021.&lt;/p&gt;
&lt;p&gt;[^16]:  Mejias and Couldry, “Datafication”, p. 3.&lt;/p&gt;
&lt;p&gt;[^17]:  Breaking the Silence, &lt;em&gt;Military Rule: Testimonies of Soldiers from the Civil Administration, Gaza DCL and COGAT, 2011–2021,&lt;/em&gt; 2022.&lt;/p&gt;
&lt;p&gt;[^18]:  Davies and Abraham, “‘A Million Calls an Hour’: Israel Relying on Microsoft Cloud for Expansive Surveillance of Palestinians”.&lt;/p&gt;
&lt;p&gt;[^19]:  Yuval Abraham, “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza”, &lt;em&gt;+972 Magazine&lt;/em&gt;, 25 April 2024.&lt;/p&gt;
&lt;p&gt;[^20]:  Olivia Solon, “Why did Microsoft fund an Israeli firm that surveils West Bank Palestinians?”, &lt;em&gt;NBC News,&lt;/em&gt; 28 October 2019.&lt;/p&gt;
&lt;p&gt;[^21]:  “Anyvision / Oosto”, DIMSE, n.d., &lt;a href=&quot;https://dimse.info/anyvision-oosto/&quot;&gt;https://dimse.info/anyvision-oosto/&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;[^22]:  Solon, “Why did Microsoft fund an Israeli firm that surveils West Bank Palestinians?”&lt;/p&gt;
&lt;p&gt;[^23]:  Antony Loewenstein, &lt;em&gt;The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World,&lt;/em&gt; 2024.&lt;/p&gt;
&lt;p&gt;[^24]:  Privacy International, &lt;em&gt;The Global Surveillance Industry&lt;/em&gt;, July 2016.&lt;/p&gt;
&lt;p&gt;[^25]:  Steven Feldstein and Brian Kot, &lt;em&gt;Why Does the Global Spyware Industry Continue to Thrive? Trends, Explanations, and Responses&lt;/em&gt;, 14 March 2023.&lt;/p&gt;
&lt;p&gt;[^26]:  Harry Davies and Yuval Abraham, “Revealed: Israeli Military Creating ChatGPT-like Tool Using Vast Collection of Palestinian Surveillance Data”, &lt;em&gt;The Guardian&lt;/em&gt;, 6 March 2025.&lt;/p&gt;
&lt;p&gt;[^27]:  Tariq Dana, “The Military-Industrial Backbone of Normalization”, 21 October 2025.&lt;/p&gt;
&lt;p&gt;[^28]:  Davies and Abraham, “‘A Million Calls an Hour’: Israel Relying on Microsoft Cloud for Expansive Surveillance of Palestinians”.&lt;/p&gt;
</content:encoded></item><item><title>The Technology Question Today</title><link>https://disjunctionsmag.com/articles/technology-question-today</link><guid isPermaLink="true">https://disjunctionsmag.com/articles/technology-question-today</guid><description>Introducing Disjunctions Magazine</description><pubDate>Thu, 11 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A sense of disorientation characterises our relationship to technology today. Everywhere we look, we find newspapers and magazines saturated with forecasts of how technology will shape society; governments are scrambling to assert national digital sovereignty; attempts to “quit social media” are a minor cultural phenomenon; and pop culture’s imaginary is overrun by visions of technological dystopia. In academic and intellectual circles, efforts to make sense of technocapitalism’s encroachment over all aspects of our lives have birthed an expansive inventory of frameworks — surveillance capitalism, data capitalism, technofeudalism, cognitive capitalism, &lt;em&gt;etc.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Such confusion is hardly surprising, given our near-total dependence on technological systems over which we have little to no control — from telecom networks and algorithmic marketplaces to dating apps and streaming platforms. Where we are powerless, tech firms today wield extraordinary influence. They undertake colossal infrastructural projects, including power grids, communication hubs, and water-guzzling data centres. Downstream of these infrastructures, we find the tools of ubiquitous surveillance and, increasingly, the weapons of war. Simultaneously, these corporations have assumed a central position in global markets through their ownership of vast pools of assets, both material and immaterial.[^1] With nine of the world’s ten largest corporations belonging to the tech sector, they play a decisive role in coordinating the global economy.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Technology has, of course, always been essential to the capitalist exigencies of increasing productivity and maintaining the rate of profit. Whether by enabling the mechanisation of labour or by allowing tighter surveillance and control over the workforce, it has given capitalists the tools to squeeze more out of workers.[^2] Further outside the office and the factory, communication technologies have also been crucial to the commodification of culture, as the advertising and entertainment industries fold more and more of society into the circuits of capital.[^3] By manufacturing desires, shaping perceptions, and distracting attention away from dissent, they work to secure capital’s dominance over the social sphere, smoothing the production and circulation of commodities.&lt;/p&gt;
&lt;p&gt;These accounts of technology as an instrument of domination both inside and outside the workplace are largely complementary. Both rest on the idea that technological systems developed within capitalist social relations tend to reinforce those relations almost by design. Yet, it is inadequate to stop here, with a vision of technological capitalism as an all-powerful, closed system — an iron cage. Such an analysis would mean that any form of opposition or resistance is ultimately futile, dovetailing neatly with the narrative that the ruling elite have long sought to establish — that &lt;em&gt;there is no alternative&lt;/em&gt;. Without the ability to imagine an existence beyond capitalism and a credible political horizon to coalesce around, we remain fragmented, each engaging in ineffectual forms of dissent. Whether we turn to scepticism, withdrawal, or to purely abstract critique, we remain unable to seize or build upon the moments when radical political contestation becomes possible.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Any effective emancipatory project, insofar as the technology question is concerned, must address the following three problems.&lt;/p&gt;
&lt;p&gt;First, it must emphasise the need for a materialist analysis of the contemporary technological landscape. Certainly, it is difficult to exaggerate the extent to which the infrastructures used to propagate information, produce knowledge, and shape worldviews are steered by the interests of a small cadre of capitalists. Simultaneously, workers are rendered the objects of monitoring systems that funnel data into algorithmic management processes, mirroring the closed feedback loops imagined by early cyberneticians. Worse yet, surplus populations — excluded from formal wage labour — are faced with even more violent techniques of surveillance, discipline, and outright elimination. Mainstream narratives around technology have been adept at masking these realities, peddling technofuturist utopias that only billionaire visionaries — the sole custodians of humanity’s destiny — can deliver.[^4] Analysis that breaks through these layers of discursive obfuscation is crucial for understanding the balance of power at different nodes in this landscape and building political projects capable of articulating a coherent transformative vision.&lt;/p&gt;
&lt;p&gt;Second, this analysis cannot be divorced from the movements and struggles already unfolding around us. These include worker organising within the tech industry, such as unionisation drives and campaigns against employers’ complicity in Israel’s genocide in Gaza. They also include broader social movements, such as those mobilising against data centre expansions, and against the deportation/border control infrastructure constructed by ICE and Palantir. It is critical to re-think theory with and through these movements — to understand the fracture-points that their actions expose, and to harness the libidinal energy that animates them towards increasingly radical ends.&lt;/p&gt;
&lt;p&gt;Finally, it is crucial to adopt an internationalist worldview. Capitalism is a global system, with its origins rooted in what Marx called &lt;em&gt;primitive accumulation&lt;/em&gt; — the imperialist plunder of the periphery. This remains true in the contemporary economy, the digital manifestations of which are propped up by rigidly disciplined workforces in China and India, and the extraction of critical minerals in the Congo and Latin America.[^5] As such, we cannot pretend that technocapitalist hegemony can be seriously contested within the confines of the nation-state. Nor can we pretend that there is a single idealised worker, capable of seizing the means of production and bringing the system to an end. To organise ourselves in this conjuncture, our points of departure must be multiple. We must strive to synthesise convergences between the causes of workers around the world, struggles around social reproduction, and the plight of so-called surplus populations.[^6] In doing so, we must keep the specificity of each movement in sight and reckon with the historic failures of internationalist solidarity.&lt;/p&gt;
&lt;p&gt;Taken together, these concerns shape &lt;em&gt;Disjunctions&lt;/em&gt; as a space for a rigorous critique of technology committed to emancipatory ends. &lt;em&gt;Disjunctions&lt;/em&gt; will serve as a home for theoretically rich analyses of technocapitalism’s many contingencies; for studies of concrete manifestations of technocapitalist power; for reckonings with the resistances that emerge in response; and for finding common ground and building alliances between struggles around the world.&lt;/p&gt;
&lt;p&gt;[^1]:  These increasingly include immaterial assets, such as data and intellectual property. See: Cecilia Rikap, &lt;em&gt;Capitalism, Power and Innovation: Intellectual Monopoly Capitalism Uncovered&lt;/em&gt;, 2021. They also include large financial portfolios. See: Fernandez et al., “The Financialization of Big Tech”, &lt;em&gt;Stichting Onderzoek Multinationale Ondernemingen,&lt;/em&gt; 2020.&lt;/p&gt;
&lt;p&gt;[^2]:  A rich body of literature has focused on this. See: Harry Braverman, &lt;em&gt;Labor and Monopoly Capital,&lt;/em&gt; 1974; David F. Noble, &lt;em&gt;Forces of Production: A Social History of Industrial Automation&lt;/em&gt;, 1984.&lt;/p&gt;
&lt;p&gt;[^3]:  This analysis is often associated with the Frankfurt School. See: Max Horkheimer &amp;amp; Theodor W. Adorno, &lt;em&gt;Dialectic of Enlightenment&lt;/em&gt;, 1947. Analogous arguments were later made by theorists in the Italian &lt;em&gt;operaismo&lt;/em&gt; tradition. See: Mario Tronti, &lt;em&gt;Factory and Society,&lt;/em&gt; 1962.&lt;/p&gt;
&lt;p&gt;[^4]:  One of the clearest statements of this position can be found in &lt;em&gt;The Techno-Optimist Manifesto&lt;/em&gt;, written by venture capitalist Marc Andreessen (of Andreessen Horowitz).&lt;/p&gt;
&lt;p&gt;[^5]:  Nick Dyer-Witheford, &lt;em&gt;Cyber-Proletariat: Global Labour in the Digital Vortex&lt;/em&gt;, 2015; Christian Fuchs, &lt;em&gt;Digital Labour and Karl Marx,&lt;/em&gt; 2014.&lt;/p&gt;
&lt;p&gt;[^6]:  For a recent attempt at such a synthesis, see: Nancy Fraser, “Behind Marx’s Hidden Abode”, &lt;em&gt;New Left Review,&lt;/em&gt; March–April 2014.&lt;/p&gt;
</content:encoded></item></channel></rss>