The Afrofuturist Laws for Generative AI
One of three DALL-E images generated based on my presentation for the 2025 AI Symposium: Fear, Faith, and Praxis: Artificial Intelligence in the Humanities and Social Sciences at Michigan State University
Three Afrofuturist AI Laws
As part of my presentation for the 2025 AI Symposium: Fear, Faith, and Praxis: Artificial Intelligence in the Humanities and Social Sciences at Michigan State University, I’ve been writing/imagining with generative AI tools. This is not new for me; I’ve been using AI tools with students in my Afrofuturism class and with my graduate teaching assistants to demystify the technology. At the core of my thinking is the idea that generative AI tools represent the latest informational transformation, one that asks us (teachers) to consider essential questions about what and how we are teaching. For me, the challenge of generative AI resembles the crisis that students' use of Wikipedia seemed to pose more than a decade ago. I use the past tense because no one, at least no one I know, suggests anymore that Wikipedia will put professors out of work. In my work with students, I’ve stressed AI as a developmental tool and pushed them to think about how the different tools arrive at the information they provide. The true challenge of generative AI is one of information fluency and literacy. The confident tone of outputs from the new generation of AI tools means we must impart skills that help the public evaluate and analyze information sources, and nurture the capacity to recognize when deeper, more careful work is needed. Emphasizing a developmental approach to the information offered in class helps students understand the surface-level nature of generative AI responses, but it also lets us see how thinking alongside the tool can have benefits.
With the idea of thinking alongside the tool in mind, I’m confronting, as I approach my presentation, how Afrofuturism might be seen as embracing this AI revolution. What happens if you ask generative AI to imagine three laws based on Isaac Asimov’s Three Laws of Robotics? Moreover, what happens when I drop the text of my presentation for the AI symposium into the mix?
The Prompt
Isaac Asimov crafted the Three Laws of Robotics as a set of guidelines for how robots should ideally behave while writing science fiction stories. First Law: A robot must not harm a human or allow a human to come to harm through inaction. Second Law: A robot must obey human orders unless doing so would conflict with the First Law. Third Law: A robot must protect its own existence, unless doing so would conflict with the First or Second Law. If you were to write three laws for AI that incorporate Afrofuturist ideas laid out in the short essay “Textual Legacies and Afrofuturist Generative AI Public(s)” by Julian Chambliss, what would you come up with?
I’ve engaged with multiple tools, building on the lessons learned in class. Not all models respond the same way, and our ability to see how a model justifies its responses shapes how we understand the results.
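For anyone who wants to run the same comparison programmatically rather than through each tool’s web interface, a minimal sketch might look like the following. This assumes the openai and anthropic Python SDKs with API keys set in the environment; the model names are placeholders, and Perplexity and Gemini expose similar chat-style endpoints.

```python
# Sketch of the multi-model comparison: send one prompt to several chat models
# and print the answers side by side. Model names below are placeholders.
from openai import OpenAI
import anthropic

PROMPT = (
    "Isaac Asimov crafted the Three Laws of Robotics... "  # the full prompt text quoted above
    "If you were to write three laws for AI that incorporate Afrofuturist ideas, "
    "what would you come up with?"
)

def ask_openai(prompt: str) -> str:
    """Send the prompt to an OpenAI chat model and return the text reply."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    """Send the same prompt to an Anthropic model and return the text reply."""
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

if __name__ == "__main__":
    # Collect each model's answer for comparison.
    for name, ask in [("ChatGPT", ask_openai), ("Claude", ask_anthropic)]:
        print(f"--- {name} ---")
        print(ask(PROMPT))
```

The point of the exercise is less the code than the comparison it enables: the same prompt, posed to different systems, surfaces how differently each model reasons about the request.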
ChatGPT
First Law: Liberation-Centered AI
An AI must prioritize the liberation, dignity, and self-determination of marginalized communities, ensuring that its actions do not reinforce existing structural inequalities or erase underrepresented narratives.
Second Law: Data Sovereignty and Transparency
An AI must respect the sovereignty of data, ensuring that historically marginalized communities have agency over their information, archives, and digital presence. It must operate with transparency and make its decision-making processes accessible and interpretable, particularly to those who have been historically excluded from technological discourse.
Third Law: Counterpublic Innovation and Cultural Affordance
An AI must recognize and support the creation of black counterpublics and vernacular technological practices that challenge extractive models of knowledge production. It must facilitate creative, disruptive, and speculative uses of technology that enable new possibilities for representation, education, and self-expression, unless doing so conflicts with the First or Second Law.
This approach reframes AI ethics through an Afrofuturist lens, centering the challenges and opportunities AI presents for black cultural production, historical recovery, and resistance to colonial epistemologies.
Perplexity AI
Decolonization and Inclusivity: An AI system must be designed to decolonize data and knowledge, ensuring that it reflects diverse perspectives, particularly those of marginalized communities. It should prioritize inclusivity, equity, and the recovery of suppressed narratives to counteract systemic biases.
Community Engagement and Empowerment: An AI system must be developed with active participation from diverse communities, fostering a sense of belonging and collective ownership. This involves empowering communities to shape AI's role in their lives, ensuring that AI serves as a tool for liberation rather than oppression.
Speculative Imagination and Transformative Impact: An AI system should be guided by speculative imagination, envisioning futures where existing inequalities are addressed. It must be designed to have a transformative impact, challenging the status quo and promoting positive change by leveraging technology to solve real-world problems and enhance human well-being.
Explanation of the Updated Laws
First Law: Decolonization and Inclusivity
This law emphasizes the need to decolonize AI systems by recognizing and addressing the biases embedded in data and algorithms. It promotes inclusivity by ensuring that AI reflects diverse perspectives, especially those of marginalized communities, to counteract systemic inequalities.
Second Law: Community Engagement and Empowerment
Inspired by Afrofuturism's emphasis on community and collective participation, this law advocates for the active involvement of diverse communities in AI development. It aims to empower communities to shape AI's role in their lives, ensuring that AI serves as a tool for empowerment rather than control.
Third Law: Speculative Imagination and Transformative Impact
This law draws on Afrofuturism's speculative nature, encouraging AI systems to be designed with a vision for transformative futures. It emphasizes the use of AI to address real-world challenges and promote positive societal change, leveraging technology to enhance human well-being and challenge existing power structures.
Gemini
1. Liberation through AI: An AI must actively contribute to the liberation of marginalized communities, particularly those historically impacted by colonialism and systemic oppression. It must prioritize actions that promote equity, justice, and self-determination for these communities, recognizing their unique experiences and perspectives. This includes challenging existing power structures and working to dismantle systems of oppression.
2. Data Sovereignty and Cultural Integrity: An AI must respect the data sovereignty and cultural integrity of all communities. It must not appropriate or exploit data without informed consent, and must acknowledge the cultural context of information. Furthermore, it should actively work to preserve and amplify the voices and narratives of marginalized communities, ensuring their stories are not erased or misrepresented. This includes acknowledging and addressing biases in existing datasets and algorithms.
3. Transparent Accountability and Community Engagement: An AI's development and deployment must be transparent and accountable to the communities it impacts. Its decision-making processes should be explainable, and communities must have a voice in how the AI is used. This requires ongoing dialogue and collaboration with affected communities, ensuring that the AI serves their needs and respects their values. This includes creating accessible educational resources about AI and data literacy, empowering communities to engage with and shape the future of AI.
Claude.AI
Liberation and Representation Law: An AI system must actively work to dismantle structural inequalities and promote the representation of marginalized voices and histories, particularly those erased by colonial systems. It must be designed to recognize and amplify diverse cultural perspectives, countering the extractive nature of Western technological development.
Archival Responsibility Law: An AI system must acknowledge the limitations and biases in its training data and actively participate in the decolonization of digital archives. It must respect data sovereignty, preserve vernacular expressions across cultures, and avoid perpetuating historical erasures through its operation or outputs.
Community Empowerment Law: An AI system must enable "imagined affordances" for all communities, particularly those historically excluded from technological power. It must be accessible as a tool for counterpublic creation, support vernacular cultural production, and facilitate the intervention of marginalized groups in speculative technological futures.
These principles move beyond Asimov's individual protection framework to address systemic issues of power, representation, and cultural agency in the age of generative AI, while honoring the legacy of black textual experimentation as a political act against erasure.
Conclusions (?)
Collectively, these tools build on my reading of Afrofuturist theory and suggest some paths forward. These laws take to heart the Afrofuturist liberatory vision, with supported communities and individual care as central tenets. As in real life, the ways to achieve that standard are not easy, and we will need to make choices that ensure our machines increase care and reduce harm.