After spending almost two decades in big tech, I was notified last month that I am being laid off. There have been massive waves of layoffs across the industry recently, and I am just one of the many tens of thousands of tech workers impacted.1 Nevertheless, the news marked a moment of great personal change for me, as it prompted me to finally gather the courage to make a decision I had been putting off for years. I am leaving Big Tech.
I will no longer be pursuing any job opportunities in Big Tech or Silicon Valley-type startups. This is not a decision that I am making lightly. In fact, the intention to leave Big Tech has been constantly on my mind for the last several years. I extensively debated whether to publicise my decision, and finally convinced myself that it is important that I do. Conversations with friends, colleagues, and collaborators over the years have led me to believe that I am not alone in wrestling with this.
Why am I leaving Big Tech? There are several reasons. While I list a few below, I believe they stem from the same underlying structural problem: an unprecedented concentration of power in the hands of those in Big Tech who want to deliberately enact (or, at least, are incapable of imagining anything other than) a techno-fascist future. I believe that is the root cause of the momentous cultural and material changes we are witnessing across the industry.
Israel is committing a genocide in Gaza against the Palestinian people, one of the worst atrocities of our times. These deaths are a result of mass bombings, weaponised starvation, destruction of civilian infrastructure, attacks on healthcare workers and aid-seekers, and forced displacement. Big Tech corporations have not only played a pivotal role in materially supporting and profiting from this ongoing genocide over the last two and a half years, but have also ruthlessly silenced any dissenting voices amongst their workers.2
Years ago, I learned about the infamous history of how IBM, once the Big Tech institution of its day, had provided key technological support for the Holocaust committed by Nazi Germany against the Jewish people. How naïve I was to wonder how that could have happened; never, even in my wildest nightmares, did I imagine it would become the defining technological story of our generation.3
A decade ago, just as I was starting my PhD in information retrieval (IR), I was part of an early cohort of researchers who saw significant potential in deep learning methods for IR tasks. I co-organised the first neural IR workshop at SIGIR, co-authored a book on the topic, co-developed the MS MARCO benchmark, and co-founded the TREC Deep Learning Track. Last year, I was awarded the ACM SIGIR Early Career Researcher Award for my research on neural IR. I mention this not to brag, but as evidence of the genuine excitement I have felt over the years regarding the scientific progress in machine learning that I have both witnessed and contributed to. But today, I am deeply disconcerted by the state of AI discourse.
The hype itself is not a new phenomenon. Even as I was starting out in the field, I did not care much for the sudden rebranding of neural networks as “deep learning”. In fact, in much of my early work, I continued to use the phrase “neural IR” (shortening it to “neu-ir” to sound like “new IR”) over “deep learning for IR” and other such monikers.
But the hype around “AI” has taken a much more menacing turn. It has become something akin to a religious cult and a project of empire building, uncompromising in its opposition to critique. Tech companies are mandating that all teams embed large language models into every feature of every product and into their own daily workflows. Whether they are actually useful is completely beside the point. Why? Because the evidence-free promises of AI utopia that tech “leaders” are so boldly prophesying are remarkably effective at making stock prices soar. No, AI will not be a “new digital species” (however much you try to anthropomorphise next-token prediction algorithms), nor will it be a wand that magically solves climate change or war or any of our other problems. But the grand fictitious narratives about AI, both the hype and the fearmongering, will continue to bolster corporate claims of “foundational” advancements, creating the conditions to commodify labour, renegotiate down worker compensation, and provide political cover for the further dismantling of our social services. The result will be the largest-ever accumulation of power and wealth in the hands of a diminishing few, while the legitimate needs of the people, from healthcare to education, are met with “let them eat chatbots”. That is the intent, and that is why AI is a project of class domination.
This is not to say that technologies like language models cannot be useful. As a researcher, I am genuinely excited by their potential to enable more accessible forms of knowledge production. Yet technological artefacts cannot be separated from the conditions under which they are created, or from the realities of who controls and profits from them. Today, developing these technologies expands racial capitalism, intensifies imperialist extraction, and reinforces the divide between the global North and South. The technology is inseparable from the labour that produces it: the expropriated work of writers, artists, programmers, and peer-production communities, and the highly exploitative crowdwork of data annotation.
As an IR researcher, I am particularly alarmed by the uncritical adoption of these technologies in information access, which has been a focus of my own research.4 I am concerned that vast troves of behavioural data, combined with generative AI’s capacity to produce persuasive language and imagery, will enable the institutions that hold such data to manipulate public opinion at scale. Such manipulation may look no more sinister than today’s conversational information systems, or it may take more explicit forms in the future, such as generative advertising. Imagine a world in which every online search or interaction with a digital assistant delivers information optimised to subtly influence your consumer preferences or political beliefs.
I harbour respect for those in the industry who are undertaking critical work on how AI can be genuinely useful to society. However, I am also tremendously concerned by the shrinking power of those critical voices. Those who do such work do so under incredible pressure and with serious risks to their careers.5 The boundaries of what you are allowed to critique are narrowing fast. You are allowed (for now) to get on a pulpit and talk about fairness and representational harms (don’t get me wrong, those are very important!) as long as it paints the corporations as “responsible institutions trying to do the right thing for society”. But you are never allowed to criticise the corporations themselves, especially if it conflicts in any way with profitability. The bad actors in your threat models must always be external to the corporations (and their owners). Never criticise the concentration of wealth and power in the hands of a few. And, definitely, never talk about the military-AI complex.6
The result is the securitisation of AI discourse, which today is often framed as “AI safety”, selectively omitting questions of social justice. When so-called Responsible AI or AI ethics is defined in ways that avoid confronting exploitation, war, colonial extraction, gendered and sexual violence, and other systems of oppression, then what are we even trying to do as a community?
I don’t want to sound blasé, but getting laid off may have been the best thing to happen to me last year. I don’t wish to minimise how difficult it is to be on the receiving end of such news, and I am well aware of my privilege, having permanent residence status in Canada and sufficient short-term financial stability. I don’t wish this on anyone, and my heart goes out to everyone who has been similarly impacted by the recent layoffs. If you have been affected and would like to talk, please reach out! But in my personal context, this sincerely feels like a blessing in disguise. It took me a while to acknowledge it, but with every passing day since I got the news, I have genuinely felt more excited about the future.
Over the years, I have had the immense privilege of working with many incredibly kind and thoughtful people who mentored, collaborated with, and shaped me as a researcher and as a person. I am filled with the utmost gratitude to all of you, and I hope our paths will continue to cross!
And as I look to the future, I am both excited and nervous. I want to spend more time reading and engaging with critical scholarship.7 I want to spend more time in movement spaces. I want to find people who are thinking about alternatives to Big Tech and fighting back against the global slide into techno-fascism. I want to continue working on information access and reimagine very different futures for how we, as individuals and as a society, experience information.8 I want to explore spaces where I can conduct research explicitly grounded in humanistic, anti-capitalist, and anti-colonial values. I want to continue my work on emancipatory information access and realise my research as part of my emancipatory praxis.9 And above all, I want to build technology that humanises us, connects us, liberates us, and gives us joy.
Another world is not only possible, she is on her way. On a quiet day, I can hear her breathing.
— Arundhati Roy
Abolish Big Tech. Free Palestine.
Notes
1. Kate Park, Cody Corrall, and Alyssa Stringer, “A Comprehensive List of 2025 Tech Layoffs”, TechCrunch, 22 December 2025. https://techcrunch.com/2025/12/22/tech-layoffs-2025-list/.
2. Noa Yachot, “‘Data Is Control’: What We Learned From a Year Investigating the Israeli Military’s Ties to Big Tech”, The Guardian, 30 December 2025; Marwa Fatafta, “Big Tech and the Risk of Genocide in Gaza: What Are Companies Doing?”, Access Now, 11 October 2024; Federica Marsi, “UN Report Lists Companies Complicit in Israel’s ‘Genocide’: Who Are They?”, Al Jazeera, 1 July 2025; Naomi Nix, Nitasha Tiku, and Trisha Thadani, “Big Tech Takes a Harder Line Against Worker Activism, Political Dissent”, The Washington Post, 19 May 2025.
3. Oliver Burkeman, “IBM ‘Dealt Directly With Holocaust Organisers’”, The Guardian, 1 April 2002.
4. Bhaskar Mitra, Henriette Cramer, and Olya Gurevich, “Sociotechnical Implications of Generative Artificial Intelligence for Information Access”, in Ryen W. White and Chirag Shah, eds., Information Access in the Era of Generative AI, 2024.
5. Gerrit De Vynck and Will Oremus, “As AI Booms, Tech Firms Are Laying Off Their Ethicists”, The Washington Post, 3 April 2023. https://www.washingtonpost.com/technology/2023/03/30/tech-companies-cut-ai-ethics/.
6. Brian J. Chen, Tina M. Park, and Alex Pasternack, “Booming Military Spending on AI Is a Windfall for Tech—and a Blow to Democracy”, Tech Policy Press, 20 June 2025; Ioannis Kalpouzos, “Killer Robots and the Fetish of Automation”, Jacobin, 3 January 2026.
7. “What Am I Reading?”, https://bhaskar-mitra.github.io/reading/.
8. Bhaskar Mitra, “Search and Society: Reimagining Information Access for Radical Futures”, Information Retrieval Research Journal (IRRJ), 2025.
9. Bhaskar Mitra, “Emancipatory Information Retrieval”, Information Retrieval Research Journal (IRRJ), 2025.
