Nir Eisikovits
Nir Eisikovits is Professor of Philosophy and Founding Director of the Applied Ethics Center at UMass Boston. His scholarship addresses moral and political dilemmas arising from war, conflict, and the use of emerging technologies. In the context of artificial intelligence, his work examines the ethical risks of autonomous systems, the militarization of AI, and the broader political consequences of technological power. In addition to his academic research, he advises organizations working on conflict resolution and writes frequently for public audiences on political violence and the ethics of technology.
James Hughes
James Hughes is a bioethicist and sociologist who serves as Associate Provost for Institutional Research, Assessment, and Planning at UMass Boston and as Senior Research Fellow at its Applied Ethics Center. In 2004, he co-founded the Institute for Ethics and Emerging Technologies with philosopher Nick Bostrom and has since served as its Executive Director. His scholarship explores the social and political implications of transformative technologies, including artificial intelligence, human enhancement, and biotechnology. Through his academic leadership, editorial work, and public engagement, he advances debate about how democratic societies should govern emerging technologies in ways that promote justice, flourishing, and human dignity.
Harvey Lederman
Harvey Lederman is Professor of Philosophy at the University of Texas at Austin and Co-Principal Investigator of the AI and Human Objectives Initiative at the School of Civic Leadership. His work spans ancient Chinese philosophy, philosophical logic, epistemology, and philosophy of language, reflecting a longstanding interest in how conceptual frameworks shape human understanding. In recent years, he has turned his attention to artificial intelligence, exploring its implications for human self-conception, agency, and the nature of meaningful work. His research examines how emerging technologies challenge established philosophical accounts of reason, value, and human purpose.
Kay Mathiesen
Kay Mathiesen is Associate Professor of Philosophy at Northeastern University. Her research focuses on information and computer ethics, with particular attention to questions of justice in the digital environment. Drawing on social epistemology, ethics, social philosophy, and political philosophy, she examines the rights and responsibilities of individuals and communities as seekers, sources, and subjects of knowledge and information. Her work explores how norms governing access, control, and dissemination of information shape democratic life and human rights. She has written extensively on issues including misinformation, digital privacy, and the ethical challenges posed by contemporary information technologies, contributing to debates about how digital systems can better support equity, accountability, and meaningful participation.
Henry Shevlin
Henry Shevlin is Associate Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Trained in philosophy of mind and cognitive science, he examines the conceptual foundations of intelligence and their ethical significance. His work on artificial intelligence investigates human–AI relationships, as well as the connections between consciousness, creativity, perception, and moral status. By bridging philosophy, cognitive science, and technology ethics, he contributes to ongoing debates about how advances in AI reshape our understanding of minds—both natural and artificial.