Global Humanities Institute 2024: Design Justice AI
DESIGN JUSTICE AI is a Global Humanities Institute that will explore community-centered, humanistic, and interdisciplinary engagement with “Generative AI,” the statistical modeling of human languages, communication, arts, and cultures. The main institute meeting will be held at the University of Pretoria, with anticipated dates of July 7-21, 2024.
DESIGN JUSTICE AI is a Global Humanities Institute sponsored by the Consortium of Humanities Centers and Institutes and the Mellon Foundation. This institute will explore community-centered, humanistic, and interdisciplinary engagement with “Generative AI” (the statistical modeling of human languages, communication, arts, and cultures), and is a partnership involving four university-based centers: the Center for Cultural Analysis at Rutgers University, the Humanities Research Centre at the Australian National University (ANU), the Centre for Advancement of Scholarship at the University of Pretoria, and the Humanities Institute at the University of Connecticut (UCHI).
The principal collaborators include Lead PI Lauren M. E. Goodlad (Distinguished Professor of English & Comparative Literature, Chair of the Critical AI @ Rutgers initiative, and editor of Critical AI) and Colin Jager (Director of the Center for Cultural Analysis at Rutgers) in conjunction with co-PIs at each of the partnering centers: Matthew Stone (Professor and Chair of Computer Science at Rutgers), Katherine Bode (Professor of Literary and Textual Studies at ANU), Vukosi Marivate (Chair of Data Science at the University of Pretoria and lead for the Data Science for Social Impact Group), and Michael P. Lynch (Board of Trustees Distinguished Professor of Philosophy and former director of the Humanities Institute). You may find a full list of their collaborators below. The main institute meeting in 2024 will involve an application process that funds up to 20 additional interdisciplinary scholars.
For questions, please write to firstname.lastname@example.org.
The pre-institute meeting is tentatively scheduled for July 7-11, 2023 in Canberra on the campus of the Australian National University. This initial meeting will establish an application process to fund up to 20 interdisciplinary scholars, with a special focus on early-career and emerging scholars and community partners, who will join the group at the main institute meeting. We will announce the application process in July 2023.
The main institute meeting, DESIGN JUSTICE AI, will be held at the University of Pretoria, with anticipated dates of July 7-21, 2024, and will include collaborators from all four centers along with the scholars chosen through the application process. The DESIGN JUSTICE AI meeting at Pretoria will be partly hybrid in order to welcome the participation of interested scholars, technologists, and community collaborators worldwide.
We plan to invite distinguished speakers to join us via hybrid or virtual lectures so as to prioritize in-person participation by emerging scholars.
A post-meeting event will be scheduled for Fall 2024 and will likely take the form of a hybrid event on the Rutgers campus.
Research Goals and Questions
By now many people have heard about ChatGPT and other “large language models.” What they may not know is that these are examples of the rapid diffusion of so-called generative AI: machine learning technologies that simulate human languages, communication, arts, and cultures through the statistical modeling of vast troves of “scraped” internet data. Our Global Humanities Institute is inspired by the work of the Design Justice Network, a hub for people committed to embodying and practicing the Design Justice Network Principles. Longstanding DJN member Sasha Costanza-Chock (former fellow of the Berkman Klein Center for Internet & Society and Head of Research for OneProject.org) wrote Design Justice: Community-Led Practices to Build the Worlds We Need (2020) to advance community-led design practices. Our approach to these practices and topics combines interdisciplinary critique, public humanities, and best practices from data science and digital humanities (DH), with collaborative research that strives to center people and cultures that have been marginalized by design processes.
DESIGN JUSTICE AI will cross disciplinary divides and reach out to affected communities as we foster creative thinking, model new forms of research, and produce resources for scholars and the general public. As commercial technologies aim to simulate and mediate human expression and creativity at an unprecedented scale, our Global Humanities Institute will seek interdisciplinary standpoints and fertile alliances that produce knowledge “from below”: through creative collaborations between researchers, students, and community partners. Our goal is not only to “critique” these fast-developing technologies, but also to envision ML systems that work in the public interest: i.e., safe, accountable, and inclusive systems that are receptive to many voices.
Through publication of blogs, research templates, interviews, experimental datasets, recorded lectures, pedagogical practices, and peer-reviewed articles and special issues, our institute will share resources that help to diffuse these critical methods. In doing so, we hope to help any campus to develop nuanced understanding of and engagement with “generative AI,” including robust pedagogical strategies, and the potential for community-centered research projects informed by design justice principles.
Our guiding questions include:
What would be lost from human creativity and diversity if writers or visual artists come to rely on predictive models trained on selective datasets that exclude the majority of the world’s many cultures and languages?
What frameworks or evaluation practices might help to concretize what is meant by “intelligence,” “understanding,” or “creativity”–for machines as well as humans? How might such humanistic interventions help diverse citizens to participate in the design and implementation of generative technologies and the benchmarks that evaluate them?
What are the strengths and weaknesses of current statistical models–which generate outputs probabilistically (by privileging dominant patterns) and selectively (based on scraped data)–in modeling the lived knowledge, embodied cognition, and metareflection that informs human communication, art, and cultural production?
If evidence suggests that “generative AI” is harmful–and/or counter to the professed objective of enhancing human lifeworlds–what alternatives might be forged through community participation in research that rearticulates goals and reframes design from the bottom up? What kinds of teaching, research, community practices, and policies might sustain these humanist-inflected and justice-oriented design processes?
Although the DESIGN JUSTICE AI outlook will not reject the potential utility of “generative AI” out of hand, our research questions go to the heart of what inclusive collaborations can contribute to the study of resource-intensive technologies that aim to monetize and “disrupt” human communication and creativity.
Rutgers University, Center for Cultural Analysis (CCA)
Brittney Cooper: Professor of Africana Studies/Women’s, Gender, & Sexuality Studies; a scholar of Black women’s intellectual history and race and gender politics, Cooper is currently working on sexism and racism in digital and social media contexts.
Alex Guerrero: Professor of Philosophy; a J.D. who specializes in moral and political philosophy as well as African and Native American Philosophy, Guerrero has taught a recent graduate seminar on the Ethics and Politics of AI.
Australian National University (ANU), Humanities Research Centre
Kate Henne: Director of the School of Regulation and Global Governance and the Justice and Technoscience Lab, Henne is also the Chief Investigator on the Humanizing Machine Intelligence Grand Challenge project; she researches the ways governance reshapes technological approaches.
Adrian Mackenzie: Professor of Sociology whose research on “AI”’s impact on contemporary cultures includes linkages between platform infrastructure and prediction, as well as large image collections.
University of Pretoria, Centre for Advancement of Scholarship
Abiodun Modupe: Lecturer in Computer Science and specialist in the modeling of local African languages, Modupe is currently implementing a degree program in “Big Data Science” for the Data Science department.
Emma Ruttkamp-Bloem: Head of Philosophy, AI Ethics lead for the Centre for AI Research, and Chair of the Southern African Conference on AI Research (SACAIR), Ruttkamp-Bloem is an ethics policy researcher currently serving as a member of the UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST).
University of Connecticut, Humanities Institute (UCHI)
Alexis L. Boylan: Director of Academic Affairs, UCHI and professor of Art and Art History and the Africana Studies Institute, Boylan works on the rights of visual artists with regard to “generative” technologies. Specifically, she is interested in how these technologies will affect local art communities and markets globally.
Eleni Coundouriotis: Professor of English and Comparative Literary and Cultural Studies, Coundouriotis researches African literature and human rights.
Yohei Igarashi: Associate Professor of English and Coordinator of Digital Humanities and Media Studies for UCHI, Igarashi is working on collaborative projects with language models and has written on the relation between “generative” writing and literary history.
Design Justice Network (DJN)
Wesley Taylor: Assistant Professor in the Department of Graphic Design at Virginia Commonwealth University and member of the Design Justice Steering Committee, Taylor is a printmaker, graphic designer, musician, animator, educator, mentor, and curator whose practice is rooted in social justice.
The institute organizers are grateful to all of the above faculty for their input as well as to CCA Business Manager Matt Leonaggeo, Rutgers Grants Specialist Justin Samolewicz, and the Critical AI @ Rutgers team (Kristin Rose, Jennifer Vilchez, Andi Craciun, Ang Li, and Jai Yadav).
For questions, please write to email@example.com.