One striking feature of contemporary “artificial intelligence”, whatever that might mean today, is that it is somehow nothing and everything at the same time.1 On the one hand, it demonstrates impressive feats of aggregation2 and compression;3 on the other, it fails spectacularly at tasks of both logic (expected, given the technical structure of large language models) and recall (unexpected, because wasn’t this one of the defining “advances” made by AI?).4 The result, then, is a temporal distention, where generative AI in 2025 is kinda already here but also always only really arriving in the future — either via improvements or via the holy grail of AGI. The latter part of this temporality can be understood as a scalar movement through the speculative, where the next-word prediction (of an LLM) and the next-moment prediction (of financial investments) find themselves entangled in the look to the future: what may happen? What may be wanted?5 An important part, however, of this speculative dimension is one that is embodied in the user: the subject par excellence of the service economy.6 If all of us are users — and if this has somehow superseded our being citizens or workers (big if) — then perhaps it is worth asking what kind of user this particular user (of generative AI) is.
I want to argue here that this user is a beta: a tester, first and foremost, even before they get to be a consumer. Further, this “being beta” marks a specific, testy relationship between citizens of the global North and the matrices of extraction that they are caught in today.7
It might be evident to some readers how we are all testers today (some of us more than others, of course). As tech companies fail to generate the profits they desire from infrastructurally expensive chatbots, cheap VC-funded chats with LLMs end up underscoring how generative AI is truly a solution in search of a problem.8 Each interaction with an LLM, then — be it awe-inducing or underwhelming or somewhere in between — is an attempt to automate (part of) some other labour pipeline. In doing so, it also becomes an attempt to gather more data on said pipeline, while the whole world throws the kitchen sink at a piece of technology that has a very specific modality — one that masquerades as an all-purpose suggestion of a possibly useful tool.9 It is true that some alpha testing often takes place inside company/lab offices before a new model is released; but by and large, it is clear that the mantle of beta testing now falls squarely upon all of us.10 This has a long recent history: companies once outsourced private R&D and the development of intellectual property to the much cheaper, subsidised public education system;11 they outsourced the labour of writing code to the global South;12 and they crowdsourced insights and data collection to all of us, and to our sociality by extension, making us complicit in the very production of the machines that we use.13 Today, they outsource not just the personal, cultural, and social implications of their product (the so-called alignment), but also the very product-ness of their product (what it is for, what it does, what it cannot do) to millions of beta testers.14 If we cannot rely upon the given product to do what we want it to do (or hope for it to do, or delude ourselves into thinking it can do), then this unreliability contributes to the general precariousness of our existence under late capitalism, while also reminding us how some technologies were never tools — and perhaps could never be(come) tools.15
The subjective move from being a user to being a beta tester/user does not just tell a straightforward tale of expanding work-as-precarity, but also signifies the present-day social relationships between technology and extraction writ large.16 At the heart of this move is how extraction is enabled across distributions of time and space. Two simple terms-as-models can help elucidate what is not being straightforwardly discussed here. The first is extraction (of resources, as surplus value or via explicit violence) from one part of the world (say South) to another (say North); the second is accumulation (of resources, such as by means of enclosure), and the entrenchment of a given state of materialities. What is at stake here is instead the act of imagination of an operation — what even is going on when a user meets the so-called used product? — that is being extracted and accumulated.
To refract via some conventional Marxist frameworks, the user is being asked to consider possible reifications, and to hand over the blueprints for the same to the capitalist.17 The user sits down at the screen and thinks step-by-step with the machine, a common prompt-interface modality, and throws possible use-cases at the model — tracking the efficacy of the fledgling cyber-homunculus,18 guiding and coaching it into usability, into getting better at tasks that will one day, in the future, be automated.19 Perhaps the real agentic AI was the users we agent-ified all along.
Allow me to ruminate on the beta-ness of it all. On the one hand, as a compulsory beta tester, every user makes deals with the incompleteness and the ongoing-ness of the state of affairs around us. If something is broken or wrong at the moment, it is because it will be fixed in the future (by more of the same).20 Even if a promise was flawed, in the moment of interaction, maybe it’s because we — the betas21 — didn’t try hard enough?22
On the other hand, a beta tester/user also emerges as a beta, the perpetually embarrassed loser in the folk pseudo-socio-psychology that is the wolf-pack nonsense. A beta, in this sense, is someone not atop the pecking order; a static mirage observed by some biologists, who themselves now realise that familial solidarities are a more complicated affair in animals and humans alike. But our beta user never got that memo from ChatGPT. He — and now it is a he — still wants to be an alpha; and a sigma grindset, leading him to higher productivity through automation and assistance, is his way to get there. But in this desire, too, he remains a beta, with LLMs undercutting his productivity while, at the same time, increasing the speculative productivity of a future market thoroughly infused with GAI or AGI. Like all stupid pop-psych garbage, the beta as a marker remains typologised unfairly, but in a way that structurally prohibits the move out of the stereotype. Always already a beta, says the structure to the beta user. And to think: this beta user is not even the real worker, who is elsewhere — in the global South, inside the factories of materiality.23
And yet, there is revenge in the offing. Alongside the secret sauce of the operationalisation of intellectual processes (or not-so-secret; in most cases, it is just tacit knowledge being articulated symbolically), the beta user ends up polluting the very well that holds his extracted insights.24 As is being demonstrated by the recent flagging results from bigger and newer LLMs — and the monster of scale could certainly never have been slain so easily — the very fact of knowledge extraction comes fully equipped with its own dialectical movement: the extraction of ignorance.25 In this global monkeys-on-a-typewriter experiment, OpenAI (and Google, and Meta, and so on) expect an eventual convergence between experts and expert tasks; in other words, the companies assume and hope that if enough experts train our systems for long enough, our systems will one day exhibit the same expertise.26 But precisely because most of what we do online, or on computational media writ large, is stuff that we have little clue about, the beta users end up conveying far more wrong-ness and doesn’t-work-that-way-ness than they do actual rightness, or knowledge of how to do or evaluate something. In this regard, the “we do not use student data” move by AI companies should not be read solely as a legal arse-covering (even though it is one: several legal frameworks have strict codes about what information can and cannot be shared outside a prescribed educational environment). It is concurrently also an attempt to channel away the worst pollutants of this future well of (reification) wisdom: the students who are clearly still learning — and, as all learning by definition entails, often simultaneously trying not to learn.
By clearly marking such interactions separately, corporations hope to rejuvenate the dying model-cycle, which was already showing clear signs of decay — either by disease (of existence), or by knowledge pollution, or through the antinomies of synthetic data.27 If not the final, then perhaps the penultimate laugh — a maniacal laugh — is the beta user’s; a moment of latent sigma-fication.
The true sigma move, in this (mildly) new set of social relations, then, I argue, is to be as stupid as is humanly possible. I promise to do my bit. Will you do yours?
Notes
1. Lucy Suchman, “The Uncontroversial ‘Thingness’ of AI”, Big Data & Society, 2023.
2. Fernando van der Vlist et al., “The Political Economy of AI as Platform: Infrastructures, Power and the AI Industry”, AoIR Selected Papers of Internet Research, 2024. See also: Dieuwertje Luitse, “Platform Power in AI: The Evolution of Cloud Infrastructures in the Political Economy of Artificial Intelligence”, Internet Policy Review, 2024.
3. Ted Chiang, “ChatGPT Is a Blurry JPEG of the Web”, The New Yorker, 9 February 2023; Hito Steyerl, “Mean Images”, New Left Review, 2023.
4. For a discussion of LLMs and logic-oriented tasks, see: Ranjodh Singh Dhaliwal, “A Few Notes on the Scalar Foundations of Foundation Models”, Cambridge Forum on AI: Culture and Society, 2025. For a treatment of recall and retrieval, see: Yunfan Gao et al., “Retrieval-Augmented Generation for Large Language Models: A Survey”, arXiv, 2024.
5. uncertain commons, Speculate This!, 2013; Sun-ha Hong, “Prediction as Extraction of Discretion”, Big Data & Society, 2023; Sun-ha Hong, “Predictions without Futures”, History and Theory, 2022.
6. Tung-Hui Hu, A Prehistory of the Cloud, 2016; Edoardo Biscossi, The User and the Used: Platform Mediation, Labour and Pragmatics in the Gig Economy, 2022; Markus Krajewski, The Server: A Media History from the Present to the Baroque, trans. Ilinca Iurascu, 2018; Ranjodh Singh Dhaliwal, “The Cyber-Homunculus: On Race and Labor in Plans for Computation”, Configurations, 2022; Christian Ulrik Andersen and Søren Bro Pold, “The User as a Character, Narratives of Datafied Platforms”, Computational Culture, 2021; Matthew L. Jones, “Users Gone Astray: Spreadsheet Charts, Junky Graphics, and Statistical Knowledge”, Osiris, 2023; Polina Kolozaridi, “Unstable Users: Coordinating the Configuration of Digital Objects and Projects”, Technology and Language, 2025; Scott Kushner, “The Instrumentalised User: Human, Computer, System”, Internet Histories, 2021; Joanne McNeil, Lurking: How a Person Became a User, 2019.
7. Ranjodh Singh Dhaliwal, “Organic Division of Labor — Ergonomics/Cybernetics of Labor — Inorganic Division of Labor”, in Zach Blas et al., eds., Informatics of Domination, 2025.
8. As I note elsewhere in my work, only two proper problems seem to have been found until now: the drudgery that is educational output for metrics-based credentialing (cheating at the school/college level), and global loneliness (i.e. the rapid disappearance of sociality, and its replacement with networked intimacies). See Brian Merchant, “AI Generated Business: The Rise of AGI and the Rush to Find a Working Revenue Model”, AI Now Institute, 2024; Ranjodh Singh Dhaliwal, “Generating an Artificial Democracy: On Sociological Intimacies of Bots and/as Personas”, transmediale, 2025.
9. Ranjodh Singh Dhaliwal, “The Infrastructural Unconscious: Do Computers Dream of Carbo-Silico Pipelines?”, in Bernhard Siegert and Benedikt Merkle, eds., Reckoning with Everything; Ranjodh Singh Dhaliwal, “Concretion.: (Noun, ?1541 AD - Now)”, Basel Media Culture and Cultural Techniques Working Papers, 2025.
10. In a conventional software cycle, this is the phase where bugs are ironed out of fully functional software through private/public testing. See: Geoff Duncan, “Waiting with Beta’d Breath”, TidBITS, 1996.
11. Philip Mirowski, Science-Mart: Privatizing American Science, 2011; Matthew Kirschenbaum and Rita Raley, “AI and the University as a Service”, Publications of the Modern Language Association of America, 2024; Jacob H. Rooksby, The Branding of the American Mind: How Universities Capture, Manage, and Monetize Intellectual Property and Why It Matters, 2016. Closely related to this notion of training in the educational and work-experience sense of social reproduction is, of course, the training data (needed for generating generative AI) and the training of generative AI (that happens during reinforcement learning or after a model has been released to the public).
12. Sareeta Amrute, Encoding Race, Encoding Class: Indian IT Workers in Berlin, 2016; Héctor Beltrán, Code Work: Hacking across the US/México Techno-Borderlands, in Daniela Rivero, ed., Princeton Studies in Culture and Technology, 2023.
13. Tiziana Terranova, Network Culture: Politics for the Information Age, 2010; Tiziana Terranova, “Technoliberalism and the Network Social”, Theory, Culture & Society, 2024; Tiziana Terranova, After the Internet: Digital Networks between Capital and the Common, Semiotext(e) Intervention Series, 2022; Tiziana Terranova, “Free Labor”, Social Text, 2000.
14. Katia Schwerzmann and Alexander Campolo, “‘Desired Behaviors’: Alignment and the Emergence of a Machine Learning Ethics”, AI & Society, 2025.
15. Tools can be understood, in this context, as implements which have a straightforward user/used distinction, while technology as a system complicates it. For more, see Ranjodh Singh Dhaliwal and Bernhard Siegert, “Knowing, Studying, Writing: A Conversation on History, Practice, and Other Doings with Technics”, in Nicholas Baer and Annie Oever, eds., Technics: Media in the Digital Age, 2024; Ranjodh Singh Dhaliwal, “What Do We Critique When We Critique Technology?”, American Literature, 2023.
16. Aaron Benanav, Automation and the Future of Work, 2020.
17. Dhaliwal, “The Cyber-Homunculus”; Timothy Bewes, Reification, or, The Anxiety of Late Capitalism, 2022; Fredric Jameson, “Reification and Utopia in Mass Culture”, Social Text, 1979.
18. Or a “clanker”, if you have a different sense of civility than me: https://knowyourmeme.com/memes/clanker.
19. Fabian Offert and Ranjodh Singh Dhaliwal, “The Method of Critical AI Studies, A Propaedeutic”, arXiv, 2024; Ben Grosser and Søren Bro Pold, “Reading the Praise/Prompt Machine: An Interface Criticism Approach to ChatGPT”, Proceedings of the Sixth Decennial Aarhus Conference: Computing X Crisis, 2025; Sarah Burkhardt and Bernhard Rieder, “Foundation Models Are Platform Models: Prompting and the Political Economy of AI”, Big Data & Society, 2024.
20. Théo Lepage-Richer, “Adversariality in Machine Learning Systems: On Neural Networks and the Limits of Knowledge”, in Jonathan Roberge and Michael Castelle, eds., The Cultural Life of Machine Learning: An Incursion into Critical AI Studies, 2021; Ranjodh Singh Dhaliwal et al., Neural Networks, In Search of Media, 2024.
21. In Hindustani, as in some other Indic languages, “beta” means “son” — indexing a paternalization inherent to being a beta.
22. John Naughton, “Did AI Mania Rush Apple into Making a Rare Misstep with Siri?”, The Guardian, 22 March 2025.
23. Karen Hao, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, 2025; Paola Tubaro et al., “The Trainer, the Verifier, the Imitator: Three Ways in Which Human Platform Workers Support Artificial Intelligence”, Big Data & Society, 2020.
24. Matteo Pasquinelli, The Eye of the Master: A Social History of Artificial Intelligence, 2023; Hannes Bajohr, ed., Thinking with AI: Machine Learning the Humanities, 2025; Leif Weatherby, Language Machines: Cultural AI and the End of Remainder Humanism, Posthumanities, 2025.
25. Jared Kaplan et al., “Scaling Laws for Neural Language Models”, arXiv, 2020; Ethan Caballero et al., “Broken Neural Scaling Laws”, arXiv, 2022.
26. See Brian Merchant’s excellent reporting on job losses, and on certain industries using workers (who are soon to be laid off) to make their AI slop look less sloppy. See also: Roland Meyer, “‘Platform Realism’: AI Image Synthesis and the Rise of Generic Visual Content”, Transbordeur: photographie histoire société, 2025.
27. Felicia Jing et al., “On Emplotment: Phantom Islands, Synthetic Data, and the Coloniality of Simulated Algorithmic Space”, Social Text, 2026; Benjamin N. Jacobsen, “Machine Learning, Synthetic Data, and the Politics of Difference”, Theory, Culture & Society, 2025; Shane Denson, “On the Very Idea of a (Synthetic) Conceptual Scheme”, Philosophy & Digitality, 2025; David M. Berry, “Synthetic Media and Computational Capitalism: Towards a Critical Theory of Artificial Intelligence”, AI & Society, 2025.
