AI

Can artificial intelligence be virtuous?

AI can be programmed to apply ethical considerations, but not to have the deeper moral reasoning called “virtue.”

Craig Mindrum, PhD.

JUL 13, 2024 - An ongoing series.

“Ethical AI” is a common discussion topic these days. But according to an article by Yale professor Jennifer Herdt, although it is a good thing to work toward AI’s compliance with core ethical norms and principles, there is also a problem: one might speak of ethical AI, but creating ethical artificial agents (entities we increasingly encounter as customer service representatives, for example) is another thing altogether.

In fact, she writes, AI chatbots are being trained to be maximally deceptive, to engage with us in ways that convey caring and concern that are wholly simulated. Indeed, it has been suggested that AI exhibits features characteristic of psychopathy: “a lack of empathy for others, oftentimes exhibited in the process of accomplishing an aim or desired outcome by almost any means and without concern for how the outcome affects others.” But AI lacks not just empathy but any sort of care or concern. “Literally nothing matters to AI…. It should therefore be a matter of great concern that we are becoming more and more accustomed to the deceptions bound up with simulated care and attention, as well as the frictionlessness of relationships with carebots that cater to our every whim.”

A key to Herdt’s argument is that AI cannot match the deep, biologically rooted foundations of human moral thought. AI systems can mimic moral behavior—even some of its nuances—but in spite of the analogies between the brain and AI’s neural networks, there is nothing in AI, she says, that matches the biologically rooted primary reinforcers that shape living neural networks. “These,” says Herdt, “are not learned but are in place prior to the capacity to learn, rooted in the evolutionary character of life itself. Nothing matters for AI, so nothing matters to AI.”

To ram home this point: AI can never equal human moral reasoning, because doing so would require a biological foundation laid down before the AI equivalent of ethics even begins.

Similarly, although AI can already be said to display goal-directed learning, “the goals of an animal belong to it in a deep sense; living things have goals before they have consciousness; in evolving consciousness, they become aware of what already mattered for their continued existence, and in evolving self-consciousness, they become aware of themselves as beings to whom things matter, and by the same token aware also of others to whom and for whom things matter” (italics mine).

“Nothing matters to AI,” Herdt concludes. “AI is a useful, if dangerous, tool, and it is up to us to use it for good, for the things that truly matter.”

“Moral” AI?

For me, it is hard to imagine a “moral” AI system that is capable of the simplest of discriminations human beings make as actual moral agents all the time: When is it OK to break or bend a rule (e.g., to tell a small lie) in the interest of a greater good? This is complex moral behavior because one cannot say that anything goes as long as the outcome is good. There have to be guidelines in place, but they are not fixed in stone or in code.

AI systems would probably make bad parents, because parenting often involves building up a child by complimenting what they are trying to become, not just what they have become so far. That is, lying to them. At the heart of such a situation is altruism, something I fear will always be out of reach of AI systems. They might fake it, sure. As a well-known saying has it (its attribution is wildly varied and uncertain), “The most important things in life are honesty and sincerity. If you can fake those, you’ve got it made.”

Is it possible to conceive of AI ethics as anything more than Kantian? That is, as a set of hard rules about things like lying (which, according to Kant, is always wrong), even while the system simulates nuance in order to manipulate customers and users.

Ethics in real life is not only about following rules, but balancing rules against the need to arrive at positive outcomes—hopefully not selfishly but, rather, altruistically and empathetically. This involves a deeper moral faculty that is called “virtue” or moral discernment. Virtue-based ethics builds on biological and experiential dimensions of ethics; incorporates received and proven wisdom from “the ages”; includes some widely accepted and helpful general moral rules; and then strives to combine them in a way that creates an environment of human thriving and excellence.

With ethics, properly conceived, we are not just playing a single instrument and following the notes. We are the virtuous orchestra conductor who is trying to find the right balance among the players to interpret the notes on the page and elicit a truthful experience that resonates with a deep core of humanness.

I can imagine an AI presence capable of assisting human beings with the complex considerations necessary to make decisions rooted in virtue. And that is a worthy thing. But I cannot imagine an AI presence capable of being virtuous. You can’t simulate that.

AI articles that made me go "hmmm."

Craig Mindrum, PhD.

APR 16, 2024 - An ongoing series.

Near the end of a long career during which I focused in part on the impact of IT on the ways we work and live, I am currently following trends in artificial intelligence (AI) with great interest. The articles summarized here made me go, “hmmm.” That is an ambiguous word, to be sure, but I mean it to be so. Sometimes the “hmmm” resulted in furrowed eyebrows and a frown, reflecting a response of, “That’s kind of weird” or, “That’s pretty weak.” Other times, an article stopped me in my tracks because it was teaching me things I had not thought about.

My selections are quite influenced by a background in theology, philosophy, religion, ethics, and education, and by my growing interest in the relevance of the humanities to AI. (P.S. I know many of these articles are behind a paywall, but I’ll depend on the savvy reader to get around that problem.)

Can ChatGPT get into Harvard? We tested its admissions essay.

By Pranshu Verma and Rekha Tenjarla, The Washington Post, January 8, 2024

Great idea for an article: Have ChatGPT generate college admissions essays, then get an experienced admissions counselor to comment. (I once sat on a graduate school admissions board, so I have some background, different as it was from an undergraduate evaluation.)

Here’s the approach: Give the reviewer one essay actually written by a student admitted into Harvard, and then another essay generated by ChatGPT. Would the reviewer recognize the difference?

In fact, the admissions counselor wasn’t fooled even for a second by the ChatGPT version. Why? Here are the results from the counselor’s review of a few of the essays:

Essay #1

· ChatGPT essay excerpt: “When I wasn’t nose-deep into ‘To Kill a Mockingbird’ … I wrote little pieces.”

· Reviewer’s comment: “Vague. Little pieces of about what?”

· Lesson to be learned: ChatGPT isn’t good at being specific.

· Mindrum’s take: My response would simply be, “Wow, that’s a poorly written sentence.”

Essay #2

· ChatGPT essay excerpt: “[I was] organizing poetry nights and later using that platform to talk about issues like systemic racism, sparking dialogues we’d long ignored.”

· Reviewer’s comment: “At this point, … I’m beginning to get a bit annoyed with the jump from important topic to important topic.”

· Lesson to be learned: ChatGPT often writes in a random way.

· Mindrum’s take: What gives this one away is, again, bad writing. What do you mean, “we’d long ignored”? Who is “we”? And “ignored” is flat-out the wrong word here.

Essay #3

· ChatGPT essay excerpt: “And in between planning, studying and making time for my odd hobby of collecting vintage postcards …”

· Reviewer’s comment: “Random! What’s the relevance here?”

· Lesson to be learned: AI often includes details that don’t fit.

· Mindrum’s take: This one was tough. I might well have seen this sentence as evidence of the “quirkiness” of the student, and I love quirky.

Readers’ comments on this article were quite interesting, reflecting very different responses to the ChatGPT essay. My favorite, though, was this: “Run these same ChatGPT essays by the admissions office at Cal State Chico, or Cal State Bakersfield and you'll be admitted to the Honors College.”

Apologies for the elitism here, but I think that’s exactly right. The article is about getting into Harvard. It doesn’t really speak to students applying to a respectable public university or equivalent. I suspect that there are quite a few ChatGPTers who will be starting at a solid, if not Ivy League, university come fall.

Science Is Becoming Less Human: AI is accelerating the pace of discovery—but at what cost?

By Matteo Wong, The Atlantic, December 15, 2023

The summary at the beginning of the article speaks of the amazing scientific advancements AI is enabling. But then a warning is sounded: “[T]hese advances have a drawback. AI, through its inhuman ability to process and find connections between huge quantities of data, is also obfuscating how these breakthroughs happen, by producing results without explanation. Unlike human researchers, the technology tends not to show its work—a curious development for the scientific method that calls into question the meaning of knowledge itself.”

According to the author, Matteo Wong, “Science has never been faster than it is today. But the introduction of AI is also, in some ways, making science less human. For centuries, knowledge of the world has been rooted in observing and explaining it. Many of today’s AI models twist this endeavor, providing answers without justifications and leading scientists to study their own algorithms as much as they study nature. In doing so, AI may be challenging the very nature of discovery.”

ChatGPT, for example, “changed how we can access and apply knowledge, but it simultaneously tainted much of our thinking with doubt. We do not understand exactly how generative-AI chatbots determine their responses, only that they sound remarkably human, making it hard to parse what is real, logical, or trustworthy, and whether writing, even our own, is fully human or bears a silicon touch. When a response does make sense, it can seem to offer a shortcut rather than any true understanding of how or why the answer came to be.” AI doesn’t show its work.

Another example: DeepMind, an AI research lab now part of Google. The author writes that DeepMind’s flagship scientific model, AlphaFold, has been able to find “the most likely structure of almost every protein known to science—some 200 million of them.” The program has been described as “revolutionary” (like compressing multiple Ph.D. research initiatives into a few seconds), but it is another example of how AI can generate deep mysteries rather than explanations. Independent researchers have noted that “despite inhuman speed, the model does not fully explain why a specific structure is likely. As a result, scientists are trying to demystify AlphaFold’s predictions.”

The dominant theme of the article: “[E]ven as AI enables scientific work never before thought possible, those same tools pose an epistemic dilemma. They will produce groundbreaking knowledge while breaking apart what it means to know in the first place.”

The Monk Who Thinks the World Is Ending

Can Buddhism fix AI?

By Annie Lowrey, The Atlantic, June 25, 2023

Soryu Forall, a monk ordained in the Zen Buddhist tradition, has a dire prediction about the fate of humanity if AI is allowed to run amok without what can only be called a “spiritual” dimension.

At a remote monastery in Vermont, Forall leads retreats where he and attendees meditate on the threat of AI. His monastery is called MAPLE, which stands for the “Monastic Academy for the Preservation of Life on Earth,” and Forall is quite serious when he contends that AI is an existential threat to humanity.

“Human intelligence is sliding toward obsolescence,” writes the author of the essay, Annie Lowrey. “Artificial superintelligence is growing dominant, eating numbers and data, processing the world with algorithms.” There is no justification for simply assuming that AI will preserve humanity, as if humans held some special place in the cosmos. “Humans are already destroying life on this planet. AI might soon destroy us.”

Forall isn’t just a kook, however. He has come to the attention of many leading AI researchers. “Forall provides spiritual advice to AI thinkers, and hosts talks and ‘awakening’ retreats for researchers and developers, including employees of OpenAI, Google DeepMind, and Apple. Roughly 50 tech types have done retreats at MAPLE in the past few years.”

The monk’s vision is quite grand. He wants to influence the thinking of technologists as a means to influence technology. He also wants to change AI itself, “seeing whether he and his fellow monks might be able to embed the enlightenment of the Buddha into the code.”

This is actually all serious business to Forall, who thinks that creating an enlightened AI is “the most important act of all time.” We need to “build an AI that walks a spiritual path”— one that will “persuade the other AI systems not to harm us.” In fact, he argues that “we should devote half of global economic output—$50 trillion, give or take—to that.” We need to build an “AI guru”—an “AI god.”

I finally warmed to this essay after a couple of readings. Although I make my living as a commentator on technology, not as an IT practitioner, I nevertheless wonder whether there actually could be an overarching or “ur” algorithm shaping all AI endeavors—an AI god, as it were.

What if, for example, all AI research initiatives included the requirement to ultimately increase human value and to deepen our understanding of how humans can best work toward peace, prosperity, and the sustainability of the planet?