Programme EVENT
THU 18 SEP, 7–9 pm

The Art of Composition

Collaborative investigations against weaponized AI


Júlia Nueno Guitart, Forensic Architecture
Rebecca Mignot-Mahdavi, Dimitri van den Meersche, Interest Group on International Law and Technology (ESIL)
Marit Seyer and worldwide members of the Stop Killer Robots Coalition:
Salena Barry (UK), Illés Katona (Hungary), Yanitra Kumaragurup (Sri Lanka), Omar Mboob (Gambia)
MC: Siavash Eshghi, Heinrich Böll Foundation.

Interventions by


Daphna Horenczyk, performance: The Tribunalistic Ritual. Soundscape: Brando Zores Szely. Special appearance: Soulcat.


AI has entered the realm of armed conflict. Behind the glossy PR about how AI facilitates and simplifies our lives and businesses, darker experiments are under way. They happen in front of our eyes, yet we cannot see them. What is being initiated is a shift from AI gamification to AI weaponization.
Together with our expert collaborators, we ask: What does it take to resist abuses of power that are escalated to the scale of populations and nation states? How can counter-investigations and counter-alliances operate in order to successfully expose the use of AI models and infrastructures in conflicts?

Conflict resolution is replaced by armed conflict loaded with AI. Even games are not just for fun: besides their gamification – the integration of game-design elements to motivate and engage users – their deeper layers can conceal instructions for war and other atrocities, be they military or terrorist. Ever more people, and other beings, are victims of yet another contingency, another assault, another crime against life. Unwittingly or brazenly, social and other media reinforce such sociopathic tendencies.

AI weaponization has become a global issue, as our partner, the Stop Killer Robots Coalition, can testify. AI-supported repression by governmental, military, and corporate forces is directed against individuals as well as entire populations, including a state’s own citizens, as research in human rights and international law shows. In some cases, AI has already been weaponized to extremely worrying degrees, as investigations such as those of Forensic Architecture into cases of individuated mass killings and genocide prove.

But even when we cannot see how AI is made to operate, there are ways to uncover and visualize it. Indeed, evidence-based investigations are the basis of public knowledge in non-transparent contexts. As such, counter-investigative alliances are necessary means to challenge powers that manipulate, exclude and even eliminate people.

Regardless of one’s political beliefs or ideological leanings, the notion that AI is on the verge of making life-and-death decisions without human oversight, or may have already crossed that threshold, should be profoundly unsettling to anyone still committed to rational and humane solutions to conflicts. Such commitments stand in stark contrast to the transhumanist visions of unfettered control, where technological elites steer society toward a corporate-driven future governed by anarcho-authoritarian ideals.

AI weaponization must be countered with all the means available to a democratic society. We must finally realise how powerful we are if we act together and use our institutions properly. No one is above the law, including machines.

Presentations


Technolegal Machineries:
Law’s Grammar of Violence and Harm’s Invisibilization
Rebecca Mignot-Mahdavi, Dimitri van den Meersche,
Interest Group on International Law and Technology (ESIL)

This presentation examines law as an internal component of technolegal machineries comprising drones, AI-enabled battlefield systems, biometric infrastructures, and risk-based classificatory tools. Within these assemblages, legal categories, evidentiary thresholds, and operational protocols crystallise together with sensors, data flows, and algorithmic models to translate heterogeneous lives and environments into calculable targets, tolerable collateral damage, and administrable fragments. We will explore how the law, as well as juridified computation, facilitates scalable, modular violence while obscuring cumulative harms. The second part of the talk explores tactics for resistance: generating counter-lexicons, reconfiguring evidentiary standards, and cultivating insurgent jurisprudential practices that reattach law to lived suffering. We experiment with law's re-grammaring: expanding the perceptual field so that presently invisible harms become legally cognisable and politically actionable. If the machinery currently narrows perception to what is metrically convenient, a renewed legal grammar may widen the aperture through which harm is seen, named, and contested.

Individuated Mass Killing:
A counter-forensics of computational violence in Gaza
Júlia Nueno Guitart, Forensic Architecture

Through the collective work with Forensic Architecture for the genocide case brought by South Africa against Israel at the International Court of Justice, Júlia Nueno Guitart interrogates how genocide is structured through computational systems. This intervention centres on Gaza, examining how AI-guided targeting, evacuation orders, and biometric checkpoints fragment life into data points, triggering violence that radiates outward and erodes the conditions for life. These technologies, often described as precise or neutral, instead enable scalable and repeatable harm. The intervention proposes individuated mass killing as a framework for understanding how computation organises extermination, both locally and as part of a wider techno-political order.

Global Bodies, Human Voices:
Resisting Digital Dehumanisation through Distributed Solidarity
Marit Seyer and worldwide members of the Stop Killer Robots Coalition: Salena Barry (UK), Illés Katona (Hungary), Yanitra Kumaragurup (Sri Lanka), Omar Mboob (Gambia)

This presentation unfolds across two acts. It begins with a collage of short video testimonies from campaigners around the world, each voicing why killer robots must be stopped. These fragments collectively expose how autonomous weapons systems enact digital dehumanisation: reducing people to data points, severing dignity from decision-making, and normalising automated violence. In the second act, one campaigner present on site engages in live dialogue with four others who join via point-cloud projections from different continents. This dispersed presence materialises the campaign’s insistence that resistance to digital dehumanisation is a shared, planetary responsibility. By weaving distant bodies into a single space of dialogue, the intervention asserts the urgency of banning autonomous weapons, foregrounds the necessity of human control, and calls on each of us to contribute to safeguarding our common humanity.