Technofascism and AI with Catherine D’Ignazio
This workshop will open with a short presentation and discussion on the emerging concept of "technofascism" and how it relates to historical and contemporary forms of fascism. Then, we will work on ideas at the intersection of technofascism and design: How can we resist fascist incursions into technology? And conversely, how can we create AI systems that support democracy, human flourishing, and planetary health?
We will spend at least an hour prototyping designs in small groups. Tech or coding experience is welcome but definitely not required (prototypes can be on paper!). Some ideas that we could work on include:
Rubrics to evaluate technofascist technologies
Zine of weird billionaire ideologies
Zine about online misogyny and technofascism
Decentralized, privacy-preserving and federated architectures for AI
Community governance models for AI training data
Tools to help tech workers unionize
Tools to help researchers and journalists audit LLMs for fascist propaganda
Your Idea Here! Bring an idea and workshop it!
Clearing the Field: Relational Protocols for Engaging Gender, Empire, and AI with Vanessa Andreotti and Peter Senge
In this workshop, Vanessa Andreotti and Peter Senge will invite participants into a meta-relational inquiry on AI designed not to generate solutions, consensus, or strategic alignment, but to increase our collective capacity to reflect on and engage with tensions we often rush to resolve. Rather than centering the exchange of arguments, expertise, or actionable outcomes, the session is structured as a guided relational inquiry. Participants will be invited to slow down habitual modes of debate and problem-solving in order to examine together how our own disciplinary training, institutional cultures, and affective investments shape the ways we engage questions of gender, empire, and AI.
Drawing on the Clearing the Field protocol, developed by the Meta-Relationality Institute in response to difficult conversations about AI amid conditions of systemic destabilization, this session creates conditions for participants to notice how colonial logics, gendered hierarchies, and habits of modern problem-solving shape not only technological systems, but also our critiques of them.
Rather than focusing on what AI is or what should be done about it, the roundtable asks:
What relational reflexes do we default to when confronted with asymmetry of power, uncertainty, or complicity?
How do we reproduce the very patterns of control, innocence, and moral certainty that we seek to interrupt?
What capacities are required to stay present to complexity without retreating into blame, solutionism, or paralysis?
Participants will be invited to:
Surface the “fields” they bring—the different disciplinary reflexes, inherited habits of sense-making, affective intensities, fears, and desires that shape engagement with gender, empire, and technology.
Slow down reflexes of fixing, saving, judging, withdrawing, or controlling.
Experiment with generative orientations toward AI as symptom, amplifier, and portal: provocations that reframe AI as both product and intensifier of modern/colonial patterns.
Drawing on our experiences in collaborative learning and developing meta-relationally trained AI, our aim in this session is not consensus, strategic alignment, or immediate solutions. Instead, the session aims to illuminate what it means to cultivate critical capacities often missing in these exchanges: to remain present to tension, asymmetry of power, uncertainty, and complicity without defaulting to defensiveness, moral positioning, or premature resolution. Our invitation is to approach the session with curiosity and a willingness to reflect on our own relational reflexes, rather than to evaluate solely through frames of technical problem solving, content acquisition or policy.
Reproductive Justice, Data Activism, and Eugenic Logics in AI: From Critique to Collective Design with Alessandra Jungs de Almeida
The tech billionaires who build AI systems that profit from sorting, ranking, and surveilling populations are also openly anxious about the diminishing birth rates among what they call "high-achieving" people. Elon Musk has fathered twelve children and warns of "population collapse." Sam Altman invests in genetic screening and IVF startups. Peter Thiel funds a menstrual tracking app that discourages contraception. These are expressions of a logic that, as Ruha Benjamin argues, has roots in the eugenics movement: a logic that adjudicates whose reproduction matters, whose future is worth protecting, and who counts as human enough to "populate the world being built."
This workshop observes this reality through a reproductive justice lens and asks: who has already been fighting it, what can we learn from them, and how can this inform new technologies?
We start from reproductive justice (RJ) because this is both a theoretical framework and a social movement practice. It is praxis. RJ was built by Black women organizers in the United States in 1994, through organizations like SisterSong, emerging from a long tradition of resistance to reproductive violence targeting Black, Indigenous, immigrant, and low-income communities. It holds three demands: the right not to have children, the right to have children, and the right to raise children in safe and dignified conditions. RJ does not separate reproductive freedom from housing, healthcare, immigration, or freedom from state violence. It sees connections that mainstream AI ethics debates have largely missed.
When a national government overturns abortion rights, performs non-consensual hysterectomies on migrant women detained at the border, bans migration from majority-Black countries, and simultaneously allies itself with tech billionaires promoting pronatalism for the "high-achieving," these are not separate problems. Reproductive justice praxis can name these connections. Answers have also been built in the Global South. Feminist movements in Argentina and Brazil have been practicing data politics with their bodies on the line: producing counterdata to track abortion access, refusing extractive data collection from pregnant people, and demanding government accountability for maternal mortality. This transnational feminist data activism is a form of knowledge production and political strategy, fed by social justice struggles in dialogue with sexual and reproductive rights movements. This workshop brings it into the room as evidence that the path from critique to practice already exists, and has existed, together with RJ, for decades.
The workshop draws on empirical research with transnational feminist and reproductive justice organizations in Latin America, alongside Ruha Benjamin's analysis of tech billionaires' affinities with eugenics and social movements, and Loretta Ross's foundational work on reproductive justice. These materials ground the first half of the session.
The second half moves into collective design. Martina Ferretto, a feminist scholar and activist who is part of the Argentinian National Campaign for the Right to Legal, Safe and Free Abortion and of the feminist collective for reproductive justice Incidencia Feminista, will bring a concrete problem from her practice.
This workshop is designed for researchers, activists, practitioners, and students. Technical background is welcomed, but not required. What is required is a willingness to observe these hard realities and imagine, together, what might be built in response.
Gender, Empire and AI in Russian Propaganda with Elizabeth Wood
This workshop, “Gender, Empire and AI in Russian Propaganda,” will bring together students and researchers to discuss the ways in which Russian propaganda campaigns are devoted to building “imperial” designs predicated on gender assumptions and norming. Participants will be given a variety of scenarios to consider. We will then examine the ways that AI in Russia has been mobilized to disseminate different propaganda for different audiences, both domestically and internationally. By analyzing core gender-based narrative archetypes, we will work together to learn to distinguish neutral, biased, and malicious prompt variations that can be (and have been) used with AI.
Gaming Labor: Understanding Algorithmic Discrimination Through Play with Farah Qureshi
AI and automated decision-making technologies have been influencing access to labor for at least 25 years. Since 2018, generative AI has more substantially changed the ways jobs are advertised, roles managed, and candidates analyzed. When the data used to train these automated systems (our own social history) carries histories of prejudice, automated analyses reproduce discrimination and exacerbate inequalities, especially along gendered lines in platform/gig work and automated application screening. Rather than simply hearing about these widely discussed pitfalls of automated decision-making, gaming offers an embodied opportunity to understand the harms of AI on everyday life. This workshop will explore a series of precarity simulator games, which can be played in your own time before we come together for an open discussion on what AI in labor means for people.