Asymmetric Image Wars: or, how I learnt to stop worrying and love the slop
Rakesh Sengupta · April 2026 · essay
LEGO-style propaganda video. Source: YouTube.

The disappearance of the individual subject, along with its formal consequence, the increasing unavailability of the personal style, engender the well-nigh universal practice today of what may be called pastiche. — Fredric Jameson, Postmodernism, or, The Cultural Logic of Late Capitalism (1991)
War can never break free from the magical spectacle because its very purpose is to produce that spectacle… There is no war, then, without representation, no sophisticated weaponry without psychological mystification. Weapons are tools not just of destruction but also of perception. — Paul Virilio, War and Cinema (1986)

Usually, when an unsuspecting social media user encounters AI-generated imagery in their increasingly contaminated feeds, the response is one of immediate, abject revulsion. It is a digital gag reflex expressed through vomit emojis, a dystopic calculation of the implied energy and water footprint, and a creeping sense of having witnessed not merely a synthetic image, but the death of human culture itself. This visceral response is not misplaced under late, late capitalism. Fredric Jameson famously diagnosed “pastiche” as the cultural symptom of the postmodernist disappearance of individual subjectivity and style — leaving behind only the hollow imitation of dead forms.1 The AI image is arguably computational pastiche — or, in the vernacular of the internet, slop — saturated to its logical endpoint, as style and subjectivity are not merely decentred but statistically dissolved. The revulsion towards such images only intensifies when they originate from fascist quarters, as witnessed in the outrage against Donald Trump’s diabolical “Gaza Riviera” video last year, which trivialised the tragedy of an ongoing genocide through the tastelessness of real-estate speculation.2

However, we now find ourselves amidst a curious inversion of affect, where computational pastiche seems to have found its parodic potential — something Jameson argued pastiche could never do. A recent wave of AI-generated counterpropaganda videos depicting the U.S.-Israeli war on Iran has captured the anti-war, anti-imperialist imagination in ways that no prior synthetic images have managed. Most prominent among them are the blocky LEGO-style animations, in which plastic caricatures of Trump and Netanyahu peruse the Epstein files, attack schoolchildren in Iran, and are bombarded in retaliation by Iranian missiles — all set to a catchy AI-generated rap soundtrack.

The theoretical temptation to read these AI videos through Jameson’s understanding of pastiche as simply the “imitation of a peculiar or unique, idiosyncratic style” is understandable.3 But such a generalised reading would obscure the specific political context of pastiche circulating now as counterpropaganda — less as the terminal stage of postmodern aesthetic exhaustion than as a strategic redeployment of pastiche’s formal logic in the service of overt parody. Even amongst critics of generative AI, therefore, these parody videos have been shared and celebrated with collective catharsis: a catharsis that testifies to an overwhelming fatigue with the relentless, one-sided narratives mainstreamed by Western media and by Hollywood.

As David Robb documents in Operation Hollywood, the Pentagon has for decades operated a formal script-approval system through which access to military hardware worth billions of dollars is exchanged for editorial control over how the armed forces are portrayed, with liaison officers describing favoured productions as a “commercial” for them.4 The consequence of this, as Carl Boggs and Tom Pollard argue in The Hollywood War Machine, is a cinema structurally integrated into a “culture of militarism” — one that has consistently glamourised imperial violence, from the WWII “good war” genre to the post-9/11 blockbuster that deploys star actors, soaring soundtracks, and technological maximalism to legitimise the warfare state.5 Hollywood war filmmaking remains amongst the most capital-intensive genres in the industry. Its sensory overload serves its ideological function, with stars functioning less as artists and more as props for imperial soft power.

In War and Cinema, Paul Virilio famously argued that modern warfare is inseparable from cinematic technique, as both rely on what he called the “logistics of perception”.6 Weapons, for Virilio, are technologies not just of destruction but of perception, and war cannot break free from the magical spectacle because its very purpose is to manage images and deceive the enemy. For most of the twentieth century, that spectacle was largely a monopoly of the West — industrialised through Hollywood’s alliance with the Pentagon into an unchallenged ideological machine. It is these asymmetric image wars (or AI wars, if the pun holds) that the counterpropaganda videos emerging from China and Iran have begun to contest in real time. The Western monopoly over death and destruction may remain intact, but its hold over the logistics of perception is increasingly being challenged by a rival storytelling stack. In Iran, this media war has been strategically organised over the past decade by social media-savvy teams of IRGC-aligned young creators, who craft and circulate more relatable messages for global audiences.7 We are witnessing a shift in these image wars through the dialectic of slop and spectacle — of pastiche and propaganda — that now operates between Hollywood’s painstaking perfection and the barefaced syntheticity of AI videos.


Take, for instance, White Eagle vs. Persian Cat, an AI-generated short film released by Chinese state media last month that rapidly drew millions of views across platforms.8 Produced and distributed via official state media channels, including China Central Television (CCTV), the film deploys the wuxia aesthetic to frame geopolitical conflict as a martial arts epic, replete with flying swordsmen and gravity-defying stunts, rendered in the hyperkinetic visual style of fantasy film. The nonhuman allegory is sophisticated in its materialist critique. The White Eagle, draped in stars-and-stripes regalia, represents U.S. imperial overreach; the Persian Cat, a whiskered warrior drawing on the agility and cunning of feline movement, stands for Iranian resistance. Much of the action unfolds in the Golden Flow Valley, a strategic bottleneck through which flows “black iron essence” — an unmistakable metaphor for oil. The visual relief here emerges from the reterritorialisation of spectacle, as we watch the elaborate, capital-intensive machinery of CGI being turned against U.S. imperialism. After the rapid allegorical relay of recent events, from the assassination of Ali Khamenei to the blockade of the Strait of Hormuz, the film finally ends with the implicit economic vision of de-dollarisation and a post-hegemonic imaginary in which trade is rerouted through alternative corridors of multipolar alignment.

The video’s viral circulation is evidence of a growing appetite for alternative narratives that refuse the contents and conventions of the Western military-entertainment complex. Scholars have coined the term slopaganda to describe AI-generated content that combines the “mass personalisation” of recommendation systems with propaganda’s goal of influencing the “decision-making capacities of groups” at unprecedented scale and speed.9 The coinage is timely but ideologically constrained, as its empirical examples run almost exclusively from Goebbels to Steve Bannon to Elon Musk. White Eagle vs. Persian Cat is slopaganda, technically speaking, but the concept does not quite account for the contexts in which generative AI has acquired a distinctive parodic potential against the very Western media apparatus the term was coined to describe. In this case, it is slopaganda with Chinese characteristics.

The compute economics underwriting this new logistics of perception have shifted both technically and geopolitically. Perhaps it is no coincidence, then, that OpenAI shut down Sora — its AI video-generation platform — in the same week the White Eagle vs. Persian Cat video was being widely circulated. Sora had reportedly burned through billions in inference costs while generating only a fraction of that in lifetime revenue. Such a catastrophic compute-revenue gap forced OpenAI not only to abandon video generation entirely but also to end its recent $1 billion IP-sharing partnership with Disney prematurely.10 The other major U.S. video-generation model — Google’s Veo 3 — survives by gating its upper-tier version behind a $250/month plan, a far cry from Sora’s abortive business model as a social media platform where users could generate and share AI videos with a $20/month subscription.

In contrast, Chinese video generation models have shown more economic viability through their architectural efficiency and ecosystem integration, despite also operating at a loss. Kling 3.0, owned by Kuaishou, uses a 3D variational autoencoder architecture that compresses space and time together rather than processing frame by frame, simulating physical depth without the computational excess that made Sora’s diffusion transformer unsustainable.11 Another popular Chinese model — Seedance 2.0, developed by ByteDance — has narrowed its compute-revenue gap by embedding directly into CapCut’s editing pipeline, thereby integrating video generation into a platform already used daily by over a billion people. These models also benefit from China’s “Eastern Data, Western Computing” policy, which routes intensive computational workloads to low-cost data centres built in the country’s resource-rich western provinces.12 Underlying all of this is a structural advantage, where the Chinese state treats AI video less as a speculative consumer product and more as sovereign digital infrastructure, subsidising it accordingly.

Jurisdictions over intellectual property also differentiate Chinese video models from American ones. Earlier this year, Seedance 2.0 users generated and circulated a hyper-realistic clip of Tom Cruise and Brad Pitt fighting on a rooftop, prompting the Motion Picture Association to condemn the model’s training as IP theft on a massive scale.13 Whether through deliberate strategy or regulatory indifference, these models effectively treated Hollywood films as a training commons. While OpenAI had to pursue expensive licensing deals (including its ill-fated billion-dollar partnership with Disney), Chinese firms operated with greater impunity, letting lawyers catch up later. Operating in a kind of safe harbour beyond the immediate reach of U.S. and European IP enforcement, these firms have effectively decommodified Western cultural assets. And rather than halting development in response to Hollywood’s complaints, they have introduced content filters in select international markets while maintaining more permissive models for domestic users.

My own fieldwork with Indian AI creators has revealed how Chinese AI video models like Kling and Seedance have quietly built a significant user base in India, where creators across the political spectrum prefer to use them because of their cheaper subscriptions and greater copyright latitude. The same tools are mobilised very differently depending on who is using them. Hindu nationalism’s digital foot soldiers use AI video models to generate religious, jingoistic, and Islamophobic content, while counterpublics use them to imagine alternative political and infrastructural futures outside the terms set by the state. What connects these content creators is a distributed relationship with these models, developed through repetition, workarounds, and the painstaking automation of workflows across multiple platforms. The most telling example of generative AI’s disruptive potential for countering state propaganda has come from Dhruv Rathee, one of India’s most prominent liberal critics of the right-wing Modi government. Rathee, who has been working as an AI entrepreneur of late, recently created an AI-generated spoof of Dhurandhar, a recent Bollywood propaganda blockbuster. This spoof, titled Bhawandar (“storm”), is a computational parody of the cinematic idiom through which xenophobic politics in India have been gaining cultural legitimacy.14


In the ongoing war waged by the United States and Israel, pro-Iranian digital content creators — navigating the murky space between grassroots meme warfare and state-aligned production — have generated similar counterpropaganda videos, almost certainly using Chinese video models. A prominent case is the Iranian student-run channel Explosive Media (Akhbar Enfejari), which has claimed independence from the state, though its LEGO-style AI videos have also been amplified by Iranian state media.15 Working in 24-hour production cycles, the team writes scripts and generates visuals using AI and digital editing tools, producing roughly two minutes of video per day. In one of these clips, blocky toy versions of Donald Trump and Benjamin Netanyahu launch missiles, alongside a character representing the Devil, with the Epstein files cited as the motivation for the attacks.16 The animation shifts to scenes of retaliatory Iranian missiles striking Tel Aviv and U.S. outposts in the Gulf, interspersed with toy soldiers returning in flag-draped caskets made of plastic blocks.

To vernacularise the aesthetics of a toy brand this way is not only to belittle the masculinist grammar of U.S. and Israeli military spectacle, but also to exploit the reach of Western intellectual property against the West itself. LEGO is amongst the most recognisable visual forms for a global audience raised on LEGO playsets, movies, video games, and so on. The Lego Group, a private Danish company, has had longstanding ties to Hollywood through film partnerships with studios like Universal Pictures and Warner Bros. Despite these connections, the company lacks the jurisdictional reach to meaningfully litigate against Iranian creators for infringing copyright, not least because Iran already operates under Western sanctions that restrict its integration into global financial systems.

IP constitutes the legal-economic architecture through which late capitalism circulates, imitates, and monetises culture. When Jameson famously argued that postmodernism cannibalises past styles through pastiche, he did not consider the late-capitalist enclosure of culture through IP regimes that criminalise unauthorised imitation. For Jameson, pastiche is “without any of parody’s ulterior motives, amputated of the satiric impulse, devoid of laughter.”17 However, as evidenced by their widespread circulation and celebration, the reception of LEGO-style AI videos is instead marked by cathartic laughter. They reintroduce the satiric impulse through their excess of fidelity to form, combined with their timely deployment in a context of asymmetrical information warfare. When Iranian creators generate videos referencing the Epstein files, depicting Trump and Netanyahu and Hegseth as LEGO figures killing civilians, they are engaging in a computational pastiche of IP itself, turning the West’s own imitative visual culture against it.

This mimicry also demonstrates a granular awareness of U.S. politics and visual culture, a striking contrast to U.S. propaganda describing Iran as belonging to the “stone age” or to warmongering U.S. politicians who can hardly locate Iran on a map. Iran’s AI portrayal of the United States as an imperialist, settler-colonial entity with paedophiles in power, therefore, operates through a subversion of the IP regime that controls the circulation of its vaunted images. It accelerates an implosion of pastiche, as the commodity logic of late capitalism begins to cannibalise its own legal superstructure. If the culture industry developed intellectual property to manage and monetise cultural production, these videos show how the commodity form has escaped those enclosures entirely under generative AI.

This crisis of IP inevitably extends to the simulatability of stardom. Despite decades of prognostications about the decline of stardom, star power remains a primary driver of global box office returns. But as Virilio describes, stars were always “inorganic individuals through an arbitrary selection of indefinitely reproducible common features.”18 As an example, he details how Marilyn Monroe was discovered by a US army photographer during the Korean War, and how her body was “at once expandable like a giant screen and capable of being folded and reproduced like a poster, a magazine cover or a centre-spread” — never connected to anything but its own reproducibility.19 Hollywood has historically managed this plasticity of the star’s image through contracts, exclusivity agreements, and the fiction of celebrity. In a computational twist, however, Hollywood stars can now be digitally reproduced through a basic prompt, their likenesses captured and simulated without consent. What Virilio identified as the expandable, foldable nature of the photographic star has accelerated into the promptable star — detachable from any original referent and statistically recombinable at will. This threatens not merely a celebrity’s ability to monetise their face, but the entire architecture of value extraction built around star exclusivity. Unsurprisingly, then, Hollywood groups have condemned ByteDance’s Seedance 2.0 for its ability to simulate the industry’s most bankable stars with unauthorised precision.

Circling back to Iranian counterpropaganda videos, a satirical AI film trailer depicting the ongoing war features Paul Giamatti as Netanyahu, Ian McKellen as Ali Khamenei, Jake Gyllenhaal as Mojtaba Khamenei, Liam Neeson as Trump, Zach Galifianakis as JD Vance, and Judi Dench as Keir Starmer.20 This three-minute geopolitical parody anticipates a Hollywood blockbuster told entirely from an oppositional gaze, using Hollywood’s own stars against its imperial narratives. If, as Richard Dyer argued, the star image condenses “contradictions within and between ideologies” into a seemingly coherent individual — ideology made flesh and given a human face — what generative AI has achieved here is to sever that face from the ideological function it was originally built to perform.21 Virilio’s “inorganic individual” has, it would seem, become fully computable. The political pliability of Hollywood stars, who can be deployed to humanise imperial violence in one cycle and then redeployed to critique it in the next, now matches the plasticity of synthetic images in the service of real-time counterpropaganda.

Collage of screenshots of likenesses of Hollywood actors from AI film trailer on YouTube

On a tangential note, it is impossible to place the richness of Iranian visual culture — the hypnotic stillness of Iranian realist cinema and the mesmerising symmetry of Persian architecture — next to this rapid churn of AI-generated videos. The aesthetic distance between that long tradition and a LEGO Netanyahu could not be greater. But perhaps this distance is the point. After all, these parodies are not attempting to extend or replace Iranian visual culture; nor are they claiming to be cinema. Rather, as uncanny weapons in asymmetric image warfare, they turn Hollywood’s visual monopoly against itself, simulating the mass and momentum of stardom and spectacle to throw it off balance. The cathartic laughter we experience in watching this pirate appropriation of the entire absurd apparatus — from stars to franchises — is not in spite of its artificiality, but because of it. Hollywood, however, is yet to be in on its own joke.


Critics and cinephiles tend to be squeamish about AI images, and this discomfort has only intensified as the technology has rapidly moved towards an uncomfortable photorealism. As models have expanded through massive corpora and compute, this hyperscale teleology has also alienated a generation of artists who had found creative possibility in earlier, more erratic models, exploiting the unpredictabilities of the latent space to steer image generation towards a computational surrealism. But the characteristic anatomical errors, synthetic smoothening effects, and kinetic inconsistencies that once made AI images legible as flawed images are fast disappearing.

To meet the moment, therefore, we must look beyond dismissive connotations of slop. Though the term usefully captures a widespread aesthetic revulsion towards these images, it risks — as described in this article — flattening differentiated political contexts into an undifferentiated mush of pixels. More specifically, it smuggles in a Euro-American sensorial discomfort towards what postcinema scholar Shane Denson has helpfully theorised as “discorrelated images” — computational images that have slipped free of the perceptual and temporal scales through which human vision operates phenomenologically.22 To mock or mourn this slippage as slop is to remain largely indifferent to the material conditions that make these images possible, and the political uses to which they are already being put. Roland Meyer has aptly described the visual environment of AI images as “platform realism” — a second-order aesthetic derived from past images, optimised for consumer expectations, and filtered through “white, Western, male, middle-class aesthetic values”.23 But once we move away from generalised anxieties about the statistical corruption of visual culture, and study the more specific shifts happening globally around IP, and around the simultaneous production and parodification of spectacle, these AI videos open up a contradiction in existing visual culture that platform realism — like slopaganda — alone cannot account for.

Separated by over a decade, Hito Steyerl’s conceptualisations of the “poor image” and the “mean image” were never meant to describe the same thing. On the one hand, the poor image is “a copy in motion”, degraded through piracy and compression, losing resolution as it defies copyright and gains circulation.24 On the other hand, mean images are “statistical renderings”, replacing photographic indexicality and political contradiction with stochastic probability.25 Arguably, in the case of the counterpropaganda AI videos, these two visual formations have begun to bleed into one another, as the mean image enters the poor image’s circuits of informal distribution, acquiring both its pirate circulation and political charge. As Chinese and Iranian AI war videos circulate through Telegram channels, recompressed and reposted across Big Tech platforms despite bans, the mean image unexpectedly acquires the fugitive quality of the poor image.

However, in our enthusiasm for this inversion, let’s not mistake the weaponisation of AI video for some kind of revolution, or the parodification of spectacle for the dismantling of the spectacular society altogether. My suggestion, therefore, is also not that AI counterpropaganda videos under asymmetric image warfare should be treated as a grand redemption narrative for hyperscale AI itself. Generative AI remains, by any sober accounting, a net negative — an instrument of extraction and surveillance that violates Hollywood IP with the same casual indifference with which it exploits precarious data workers, dispossesses artists of their creative labour, and extracts the planet’s resources. However, in an asymmetric conflict where one side is granted impunity despite bombing over a hundred schoolchildren and the other side is condemned for blowing up detested data centres, it would be hypocritical not to contextualise Iran’s AI counterpropaganda as a net positive against the hegemony of existing war spectacle.

This article’s wager, then, is that AI images — copyright disputes notwithstanding — have the potential to erode the visual monopoly of Hollywood’s military-entertainment complex from within. Past wars in Vietnam, the Gulf, and Iraq produced propaganda that made imperial violence appear necessary and noble. Indeed, future U.S. and Israeli joint productions may well attempt the same for their ongoing war crimes in Palestine, Lebanon, Yemen, and Iran. But their efficacy as spectacle may well be diminished in the long run, their spell broken, when counterpropaganda can be generated at computational speed and negligible cost from a basement studio. Or so one can hope. For now, we witness the curious inversion of Hollywood’s own visual grammar, not through the guerrilla commitments of Third Cinema, but through the repurposing of AI platforms that were never really designed for “countervisuality”.26 The dialectic of slop and spectacle, and of pastiche and propaganda, offers no anticolonial guarantees — but necessary openings born of fatigue, and moments of cathartic laughter in the face of asymmetric image wars.

Notes

  1. Fredric Jameson, Postmodernism, or, The Cultural Logic of Late Capitalism, 1991.

  2. Guardian News, “Donald Trump shares bizarre AI-generated video of ‘Trump Gaza’”, YouTube, https://www.youtube.com/watch?v=PslOp883rfI

  3. Jameson, Postmodernism, or, The Cultural Logic of Late Capitalism, p. 17.

  4. David L. Robb, Operation Hollywood: How the Pentagon Shapes and Censors the Movies, 2004, p. 37.

  5. Carl Boggs and Tom Pollard, The Hollywood War Machine, 2016, p. 1.

  6. Paul Virilio, War and Cinema: The Logistics of Perception, 1989.

  7. Narges Bajoghli, “In the Room with Iran’s Social Media Savants”, New York Magazine, 7 April 2026.

  8. FastOrange, “CCTV AI Propaganda Video: White Eagle Alliance vs. the Persian Cats”, YouTube, https://www.youtube.com/watch?v=5dGY0_pgkv8

  9. Michał Klincewicz et al., “Slopaganda: The Interaction between Propaganda and Generative AI”, Filosofiska Notiser, 2025.

  10. Hayden Field, “Why OpenAI Killed Sora”, The Verge, 28 March 2026.

  11. Jianhong Bai et al., “SemanticGen: Video Generation in Semantic Space”, arXiv, 2025.

  12. Ning Zhang et al., “The ‘Eastern Data and Western Computing’ Initiative in China Contributes to Its Net-Zero Target”, Engineering, 2025.

  13. Dan Milmo and Andrew Pulver, “‘It’s over for us’: release of new AI video generator Seedance 2.0 spooks Hollywood”, The Guardian, 13 February 2026. For the MPA’s response, see: Gene Maddaus, “Motion Picture Association Pushes ByteDance to Curb Seedance 2.0 AI Infringement”, Variety, 20 February 2026.

  14. Dhruv Rathee, “Reality of Dhurandhar Film”, YouTube, https://www.youtube.com/watch?v=wWIJNCU8OOs

  15. Kyle Chayka, “The Team Behind a Pro-Iran, Lego-Themed Viral-Video Campaign”, The New Yorker, 2 April 2026.

  16. The Independent, “Iran State Media Share Lego Propaganda Video”, YouTube, https://www.youtube.com/watch?v=wo7e2OjyEBo

  17. Jameson, Postmodernism, or, The Cultural Logic of Late Capitalism, p. 17.

  18. Virilio, War and Cinema, p. 41.

  19. Virilio, War and Cinema, p. 25.

  20. Vandahood Live, “IRAN WAR - The Movie”, YouTube, https://www.youtube.com/watch?v=FDeBbzaj8oA

  21. Richard Dyer, Stars, 1979, p. 34.

  22. Shane Denson, Discorrelated Images, 2020.

  23. Roland Meyer, “Platform Realism: AI Image Synthesis and the Rise of Generic Visual Content”, Transbordeur: Photographie histoire société 9, 2025, p. 17.

  24. Hito Steyerl, “In Defense of the Poor Image”, e-flux, 2009.

  25. Hito Steyerl, “Mean Images”, New Left Review, March–June 2023.

  26. Nicholas Mirzoeff, The Right to Look: A Counterhistory of Visuality, 2011.