Clearing the Field: Relational Protocols for Engaging Gender, Empire, and AI with Vanessa Andreotti and Peter Senge

In this workshop, Vanessa Andreotti and Peter Senge will invite participants into a meta-relational inquiry on AI designed not to generate solutions, consensus, or strategic alignment, but to increase our collective capacity to reflect and engage tensions we often rush to resolve. Rather than centering the exchange of arguments, expertise, or actionable outcomes, the session is structured as a guided relational inquiry. Participants will be invited to slow down habitual modes of debate and problem-solving in order to examine together how our own disciplinary training, institutional cultures, and affective investments shape the ways we engage questions of gender, empire, and AI.

Drawing on the Clearing the Fieldprotocol— developed the Meta-Relationality Institute—  in response to difficult conversations about AI amid conditions of systemic destabilization—this session creates conditions for participants to notice how colonial logics, gendered hierarchies, and habits of modern problem-solving shape not only technological systems, but also our critiques of them.

Rather than focusing on what AI is or what should be done about it, the roundtable asks:

  • What relational reflexes do we default to when confronted with asymmetry of power, uncertainty, or complicity?

  • How do we reproduce the very patterns of control, innocence, and moral certainty that we seek to interrupt?

  • What capacities are required to stay present to complexity without retreating into blame, solutionism, or paralysis?

Participants will be invited to:

  • Surface the “fields” they bring—the different disciplinary reflexes, inherited habits of sense-making, affective intensities, fears, and desires that shape engagement with gender, empire, and technology.

  • Slow down reflexes of fixing, saving, judging, withdrawing, or controlling.

  • Experiment with generative orientations toward AI as symptom, amplifier, and portal—provocations that reframe AI as both product and intensifier of modern/colonial patterns

Drawing on our experiences in collaborative learning and developing meta-relationally trained AI, our aim in this session is not consensus, strategic alignment, or immediate solutions. Instead, the session aims to illuminate what it means to cultivate critical capacities often missing in these exchanges: to remain present to tension, asymmetry of power, uncertainty, and complicity without defaulting to defensiveness, moral positioning, or premature resolution. Our invitation is to approach the session with curiosity and a willingness to reflect on our own relational reflexes, rather than to evaluate solely through frames of technical problem solving, content acquisition or policy.


Reproductive Justice, Data Activism, and Eugenic Logics in AI: From Critique to Collective Design with Alessandra Jungs de Almeida

The tech billionaires who build AI systems that profit from sorting, ranking, and surveilling populations are also openly anxious about the diminishing birth rates among what they call "high-achieving" people. Elon Musk has fathered twelve children and warns of "population collapse." Sam Altman invests in genetic screening and IVF startups. Peter Thiel funds a menstrual tracking app that discourages contraception. These are expressions of a logic that, as argued by Ruha Benjamin, has roots in the eugenics movement and approaches whose reproduction matters, whose future is worth protecting, and who counts as human enough to “populate the world being built.”

This workshop observes this reality through a reproductive justice lens and asks: who has already been fighting it, what can we learn from them, and how can this inform new technologies?

We start from reproductive justice (RJ) because this is both a theoretical framework and a social movement practice. It is praxis. RJ was built by Black women organizers in the United States in 1994, through organizations like SisterSong, emerging from a long tradition of resistance to reproductive violence targeting Black, Indigenous, immigrant, and low-income communities. It holds three demands: the right not to have children, the right to have children, and the right to raise children in safe and dignified conditions. RJ does not separate reproductive freedom from housing, healthcare, immigration, or freedom from state violence. It sees connections that mainstream AI ethics debates have largely missed.

When a national government overturns abortion rights, performs non-consensual hysterectomies on migrant women detained at the border, bans migration from majority-Black countries, and simultaneously allies itself with tech billionaires promoting pronatalism for the "high-achieving" – these are not separate problems. Reproductive justice praxis can name these connections. Answers have also been built in the Global South. Feminist movements in Argentina and Brazil have been practicing data politics with their bodies on the line. For example, by producing counterdata to track abortion access, refusing extractive data collection from pregnant people, and demanding government accountability for maternal mortality. This transnational feminist data activism is a form of knowledge production and political strategy also fed by social justice struggles in dialogue with sexual and reproductive rights ones. This workshop brings it into the room as evidence that the path from critique to practice already exists, and has existed, together with RJ, for decades.

The workshop draws on empirical research with transnational feminist and reproductive justice organizations in Latin America, alongside Ruha Benjamin's analysis of the affinities of tech billionaires with eugenics, and social movements, and Loretta Ross's foundational work on reproductive justice. These materials ground the first half of the session.

The second half moves into collective design. Black Feminist Futures, an organization working at the intersection of Black feminist politics, reproductive justice and technology, will bring a concrete problem from their practice. Working in small groups, participants will design a tool or intervention that responds to that problem. The design processes will be grounded in the reproductive justice and data activism explored in the first half. The goal is not a finished product but a shared experience of what it feels like to design from this praxis rather than from the conventions of mainstream tech ethics.

This workshop is designed for researchers, activists, practitioners, and students. Technical background is welcomed, but not required. What is required is a willingness to observe these hard realities and imagine, together, what might be built in response.


Gender-Based Russian Propaganda: Curating a Dataset for LLM Testing with Elizabeth Wood and Halyna Padalko

This workshop will bring together students and researchers to collaboratively design a structured evaluation dataset for testing large language models’ responses to gender-framed narratives. Building on documented cases of coordinated pro-Russian information operations and emerging concerns about LLM grooming and AI-enabled information manipulation, participants will analyze core gender-based narrative archetypes, construct controlled, biased, and malicious prompt variations, and/or develop a multilingual testing framework. The objective is not only to benchmark model robustness and source-surfacing behavior, but also to operationalize a transparent, transferable taxonomy of gender-driven propaganda themes for AI safety evaluation.