AI can be programmed to apply ethical considerations, but not to have the deeper moral reasoning called “virtue.”
JUL 13, 2024 - An ongoing series.
“Ethical AI” is a common discussion topic these days. But according to an article by Yale professor Jennifer Herdt, although it is a good thing to work toward AI’s compliance with core ethical norms and principles, there is also a problem: one may speak of ethical AI, but creating ethical artificial agents (entities we increasingly encounter as customer service representatives, for example) is another thing altogether.
In fact, she writes, AI chatbots are being trained to be maximally deceptive, to engage with us in ways that convey caring and concern that are in fact wholly simulated. Indeed, it has been suggested that AI exhibits features characteristic of psychopathy: “a lack of empathy for others, oftentimes exhibited in the process of accomplishing an aim or desired outcome by almost any means and without concern for how the outcome affects others.” But in fact AI lacks not just empathy but any sort of care or concern. “Literally nothing matters to AI…. It should therefore be a matter of great concern that we are becoming more and more accustomed to the deceptions bound up with simulated care and attention, as well as the frictionlessness of relationships with carebots that cater to our every whim.”
A key to Herdt’s argument is that AI cannot match the deep, biologically rooted foundations of human moral thought. AI systems can mimic—even mimic some moral nuances—but in spite of the analogies between the brain and AI’s neural networks, there is nothing in AI, she says, that matches the primary reinforcers that are biologically rooted and that shape living neural networks. “These,” says Herdt, “are not learned but are in place prior to the capacity to learn, rooted in the evolutionary character of life itself. Nothing matters for AI, so nothing matters to AI.”
To drive this point home: AI can never equal human moral reasoning, because doing so would require a biological foundation laid down before the AI equivalent of ethics even begins.
Similarly, although AI can already be said to display goal-directed learning, “the goals of an animal belong to it in a deep sense; living things have goals before they have consciousness; in evolving consciousness, they become aware of what already mattered for their continued existence, and in evolving self-consciousness, they become aware of themselves as beings to whom things matter, and by the same token aware also of others to whom and for whom things matter” (italics mine).
“Nothing matters to AI,” Herdt concludes. “AI is a useful, if dangerous, tool, and it is up to us to use it for good, for the things that truly matter.”
“Moral” AI?
For me, it is hard to imagine a “moral” AI system that is capable of the simplest of discriminations human beings make as actual moral agents all the time: When is it OK to break or bend a rule (e.g., to tell a small lie) in the interest of a greater good? This is complex moral behavior because one cannot say that anything goes as long as the outcome is good. There have to be guidelines in place, but they are not fixed in stone or in code.
AI systems would probably make bad parents, because parenting often involves building up a child by complimenting what they are trying to become, not just what they have become so far. That is, lying to them. At the heart of such a situation is altruism, something I fear will always be out of reach of AI systems. They might fake it, sure. As a well-known saying has it (its attribution varies widely and is uncertain), “The most important things in life are honesty and sincerity. If you can fake those, you’ve got it made.”
Is it possible to conceive of AI ethics as anything more than Kantian? That is, as a set of hard rules about things like lying (always wrong, according to Kant), even as the system simulates nuance in order to manipulate customers and users?
Ethics in real life is not only about following rules but about balancing rules against the need to arrive at positive outcomes, hopefully not selfishly but altruistically and empathetically. This involves a deeper moral faculty called “virtue,” or moral discernment. Virtue-based ethics builds on the biological and experiential dimensions of ethics; incorporates received and proven wisdom from “the ages”; includes some widely accepted and helpful general moral rules; and then strives to combine them in a way that creates an environment of human thriving and excellence.
With ethics, properly conceived, we are not just playing a single instrument and following the notes. We are the virtuous orchestra conductor who is trying to find the right balance among the players to interpret the notes on the page and elicit a truthful experience that resonates with a deep core of humanness.
I can imagine an AI presence capable of assisting human beings with the complex considerations necessary to make decisions rooted in virtue. And that is a worthy thing. But I cannot imagine an AI presence capable of being virtuous. You can’t simulate that.