What Are We Talking About When We Talk About AI?

On October 9, 2024, the UConn Humanities Institute (UCHI) hosted an interdisciplinary symposium asking a deceptively simple question: What are we talking about when we talk about AI?

The answer, it turned out, depends on whom you ask.

Thanks to support from the Consortium of Humanities Centers and Institutes, we brought together scholars from UConn and the International University of Rabat (UIR) in Morocco, alongside representatives from industry (Hugging Face), to explore how disciplinary, cultural, and linguistic differences fundamentally shape, and often splinter, our understanding of how AI works and what it means for our collective future. The symposium was the culminating event of a project we've called "Reading Between the Lines." Over the preceding months, scholars from various disciplines engaged in conversations across continents, producing a podcast series that we envision as the first installment in an "AI Anti-Glossary": a resource that foregrounds the divergent, even contradictory definitions at play in key research terms like "intelligence," "learning," and "justice." Rather than seeking to resolve these differences in pursuit of unified definitions, the project leans into the generative friction created when we recognize that we are often talking past one another, even as we imagine we are discussing the same thing.

What Are We Talking About When We Talk About AI? Conference

"Care begins with language." Ihsane Hmamouchi

The symposium's first panel centered on the charged term "care," a concept that proved to be anything but self-evident. UConn assistant professor of communication Jiyoun Suk opened the session with a question: "How is AI changing how we care for each other?" What followed was a revealing constellation of perspectives as UConn Professor of English Anna Mae Duane joined UIR Vice-Dean of the Faculty of Medicine Ihsane Hmamouchi and Assistant Professor of Computer Science and Engineering Ouassim Karrakchou. Duane placed the dystopian trend of people "falling in love with chatbots" in a longer history in which seduction novels invited young readers to imagine romantic relationships in new and disruptive ways. Hmamouchi's presentation insisted that while she is a scientist, the work of medical diagnostics is deeply embedded in the human capacity to care for one another, even as that capacity manifests differently depending on the cultural and linguistic settings in which clinicians practice. Drawing on her recently published research, she addressed the challenges healthcare providers worldwide encounter when using AI in clinical settings, emphasizing that LLMs that do not fully understand non-English languages, including regional dialects, can flatten patient care into something unrecognizable. "Care begins with language," Hmamouchi insisted, directly challenging AI developers to find ways to recognize diversity of language and culture in their models.

All the panelists and chairs who participated in the symposium

The second session brought equally productive tensions to the term "literacy." Moderated by UConn English instructor Tina Huey, the panel convened Anke Finger (Professor of German and Comparative Literature), Arash Zaghi (Professor of Civil and Environmental Engineering), Ting-An Lin (Assistant Professor of Philosophy), and UIR computer science professor Hakim Hafidi. Finger's presentation drew on literary history to reveal how contemporary anxieties echo stories of AI-like "automatons" that emerged in German fiction over a century ago—suggesting that what feels unprecedented may actually be a recurring cultural pattern. Zaghi disrupted easy assumptions about access and inclusion, pointing to how students from lower-income areas haven't been taught to imagine such tools could be "for them"—reframing AI literacy as a question of educational equity rather than a site of debate among intellectuals. Lin complicated matters further from an ethicist's perspective, calling for a critical AI literacy that leans into the debate Zaghi eschews. Lin contended that true literacy means that users fully understand how AI outputs are generated and what biases may be embedded within them. "I want us to be careful of the tendency to think that technology can just solve everything," she cautioned, pushing back against the technological solutionism that often dominates AI discourse.

"The more unquestioning our trust of AI becomes, the more we rely on it to figure things out for us, the less reflective and creative we become. We know more facts; but we understand less." Michael Lynch

The final panel, chaired by UConn assistant professor of journalism Brad Tuttle, returned to the theme of care through the concept of rights—exploring how AI will transform labor rights and human rights in ways that test our commitments to one another. Michael Lynch (UConn Provost Professor of the Humanities) offered a stark warning: "The more unquestioning our trust of AI becomes, the more we rely on it to figure things out for us, the less reflective and creative we become. We know more facts; but we understand less."


Ihsane Hmamouchi, Vice-Dean of the Medical Faculty at the International University of Rabat, with three UCHI graduate assistants (from left to right: Tim Brown, Katrina Martinez, and Alina Ahmed). Katrina is the GA whose appointment was made possible by CHCI support, and who has been preparing and editing all of our digital outputs.


Throughout the day, the process of translation—between disciplines, languages, and conceptual landscapes—revealed what humanistic inquiry has long understood: technical systems are never just technical, and human problems cannot be reduced to technical solutions. Instead, AI systems are built from metaphors, assumptions, and value systems that humanistic inquiry is uniquely equipped to interrogate and reimagine.

The symposium made visible a central paradox: the forms of intelligence most devalued in AI discourse, such as the embodied knowledge of caregivers, the contextual understanding embedded in language and culture, and the relational capacities that make care meaningful, are precisely the forms of intelligence that resist standardization, automation, and reduction. In refusing to comply with technical logic, these ways of knowing constitute what we might call "alternative intelligences": modes of understanding that insist on multiplicity, contradiction, and irreducible human complexity. As UCHI continues to build international partnerships and interdisciplinary collaborations around human-centered AI, we seek to continue working through how these embodied forms of intelligence might reshape not just how we think about AI, but how we design and deploy it.

Anna Mae Duane
Professor, English and American Studies
Director, University of Connecticut Humanities Institute
University of Connecticut