Document Type

Article

Publication Date

2-16-2024

Journal Title

Theory and Research in Education

DOI

https://doi.org/10.1177/14778785241231561

Version

Post-print: the version of the article that has undergone peer review but has not yet been published

Disciplines

Philosophy

Abstract

Artificial intelligence technologies have become a ubiquitous part of human life. This prompts us to ask, ‘How should we live well with artificial intelligence?’ Currently, the most prominent candidate answers to this question are principlist. According to these approaches, if you teach people some finite set of principles or convince them to adopt the right rules, people will be able to live and act well with artificial intelligence, even in an evolving and opaque moral world. We find the dominant principlist approaches to be ill-suited to providing forward-looking moral guidance regarding living well with artificial intelligence. We analyze some of the proposed principles to show that they oscillate between being too vague and too specific. We also argue that such rules are unlikely to be flexible enough to adapt to rapidly changing circumstances. By contrast, we argue for an Aristotelian virtue ethics approach to artificial intelligence ethics. Aristotelian virtue ethics provides concrete and actionable guidance that is also flexible; thus, it is uniquely well placed to deal with the forward-looking and rapidly changing landscape of life with artificial intelligence. However, virtue ethics is agent-based rather than action-based. Using virtue ethics as a basis for living well with artificial intelligence requires ensuring that at least some virtuous agents also possess the relevant scientific and technical expertise. Since virtue ethics does not prescribe a set of rules, it requires exemplars who can serve as a model for those learning to be virtuous. Cultivating virtue is challenging, especially in the absence of moral sages. Despite this difficulty, we think the best option is to attempt what virtue ethics requires, even though no system of training can guarantee the production of virtuous agents. We end with two alternative visions – one from each of the two authors – about the practicality of such an approach.

Notes

Nicholas Smith is an assistant professor at Alabama Agricultural and Mechanical University. His primary research interests are the intersection of virtue ethics and artificial intelligence, and the epistemology of religion. Darby Vickers is an assistant professor of philosophy at the University of San Diego. Her research focuses on questions at the intersection of ethics, epistemology, and philosophy of education. She draws much of the inspiration for her work on contemporary issues from ancient Greek philosophy, particularly Plato and Aristotle.
