APR 16, 2024 - An ongoing series.
Near the end of a long career during which I often focused on the impact of IT on the ways we work and live, I am currently following trends in artificial intelligence (AI) with great interest. The articles summarized here made me go, “hmmm.” That is an ambiguous word, to be sure, but I mean it to be so. Sometimes the “hmmm” came with furrowed eyebrows and a frown, reflecting a response of, “That’s kind of weird” or, “That’s pretty weak.” Other times, an article stopped me in my tracks because it was teaching me things I had not thought about.
My selections are quite influenced by a background in theology, philosophy, religion, ethics and education, and my growing interest in the relevance of the humanities to AI. (P.S. I know many of these articles are behind a paywall, but I’ll depend on the savvy reader to get around that problem.)
Can ChatGPT get into Harvard? We tested its admissions essay.
By Pranshu Verma and Rekha Tenjarla, The Washington Post, January 8, 2024
Great idea for an article: Have ChatGPT generate college admissions essays, then get an experienced admissions counselor to comment. (I once sat on a graduate school admissions board, so I have some background, different as it was from an undergraduate evaluation.)
Here’s the approach: Give the reviewer one essay actually written by a student admitted into Harvard, and then another essay generated by ChatGPT. Would the reviewer recognize the difference?
In fact, the admissions counselor wasn’t fooled even for a second by the ChatGPT version. Why? Here are the results from the counselor’s review of a few of the essays:
Essay #1
· ChatGPT essay excerpt: “When I wasn’t nose-deep into ‘To Kill a Mockingbird’ … I wrote little pieces.”
· Reviewer’s comment: “Vague. Little pieces of about what?”
· Lesson to be learned: ChatGPT isn’t good at being specific.
· Mindrum’s take: My response would simply be, “Wow, that’s a poorly written sentence.”
Essay #2
· ChatGPT essay excerpt: “[I was] organizing poetry nights and later using that platform to talk about issues like systemic racism, sparking dialogues we’d long ignored.”
· Reviewer’s comment: “At this point, … I’m beginning to get a bit annoyed with the jump from important topic to important topic.”
· Lesson to be learned: ChatGPT often jumps from topic to topic at random.
· Mindrum’s take: What gives this one away is, again, bad writing. What do you mean, “we’d long ignored”? Who is “we”? And “ignored” is flat-out the wrong word here.
Essay #3
· ChatGPT essay excerpt: “And in between planning, studying and making time for my odd hobby of collecting vintage postcards …”
· Reviewer’s comment: “Random! What’s the relevance here?”
· Lesson to be learned: AI often includes details that don’t fit.
· Mindrum’s take: This one was tough. I might well have seen this sentence as evidence of the “quirkiness” of the student, and I love quirky.
Readers’ comments on this article were quite interesting, reflecting very different responses to the ChatGPT essay. My favorite, though, was this: “Run these same ChatGPT essays by the admissions office at Cal State Chico, or Cal State Bakersfield and you'll be admitted to the Honors College.”
Apologies for the elitism here, but I think that’s exactly right. The article is about getting into Harvard. It doesn’t really speak to students applying to a respectable public university or equivalent. I suspect that there are quite a few ChatGPTers who will be starting at a solid, if not Ivy League, university come fall.
Science Is Becoming Less Human: AI is accelerating the pace of discovery—but at what cost?
By Matteo Wong, The Atlantic, December 15, 2023
The summary of the article at the beginning of this piece speaks of the amazing scientific advancements AI is enabling. But then a warning is sounded: “[T]hese advances have a drawback. AI, through its inhuman ability to process and find connections between huge quantities of data, is also obfuscating how these breakthroughs happen, by producing results without explanation. Unlike human researchers, the technology tends not to show its work—a curious development for the scientific method that calls into question the meaning of knowledge itself.”
According to the author, Matteo Wong, “Science has never been faster than it is today. But the introduction of AI is also, in some ways, making science less human. For centuries, knowledge of the world has been rooted in observing and explaining it. Many of today’s AI models twist this endeavor, providing answers without justifications and leading scientists to study their own algorithms as much as they study nature. In doing so, AI may be challenging the very nature of discovery.”
ChatGPT, for example, “changed how we can access and apply knowledge, but it simultaneously tainted much of our thinking with doubt. We do not understand exactly how generative-AI chatbots determine their responses, only that they sound remarkably human, making it hard to parse what is real, logical, or trustworthy, and whether writing, even our own, is fully human or bears a silicon touch. When a response does make sense, it can seem to offer a shortcut rather than any true understanding of how or why the answer came to be.” AI doesn’t show its work.
Another example: DeepMind, Google’s AI research lab. The author writes that DeepMind’s flagship scientific model, AlphaFold, has been able to find “the most likely structure of almost every protein known to science—some 200 million of them.” The program has been described as “revolutionary” (like compressing multiple Ph.D. research initiatives into a few seconds), but it’s another example of how AI may tend to generate deep mysteries, not explanations. Independent researchers have noted that “despite inhuman speed, the model does not fully explain why a specific structure is likely. As a result, scientists are trying to demystify AlphaFold’s predictions.”
The dominant theme of the article: “[E]ven as AI enables scientific work never before thought possible, those same tools pose an epistemic dilemma. They will produce groundbreaking knowledge while breaking apart what it means to know in the first place.”
The Monk Who Thinks the World Is Ending
Can Buddhism fix AI?
By Annie Lowrey, The Atlantic, June 25, 2023
Soryu Forall, a monk ordained in the Zen Buddhist tradition, has a dire prediction about the fate of humanity if AI is allowed to run amok without what can only be called a “spiritual” dimension.
At a remote monastery in Vermont, Forall leads retreats where he and attendees meditate on the threat of AI. His monastery is called MAPLE, which stands for the “Monastic Academy for the Preservation of Life on Earth,” and Forall is quite serious when he contends that AI is an existential threat to humanity.
“Human intelligence is sliding toward obsolescence,” writes the author of the essay, Annie Lowrey. “Artificial superintelligence is growing dominant, eating numbers and data, processing the world with algorithms.” There is no justification for simply assuming that AI will preserve humanity, as if humans hold some special place in the cosmos. “Humans are already destroying life on this planet. AI might soon destroy us.”
Forall isn’t just a kook, however. He has come to the attention of many leading AI researchers. “Forall provides spiritual advice to AI thinkers, and hosts talks and ‘awakening’ retreats for researchers and developers, including employees of OpenAI, Google DeepMind, and Apple. Roughly 50 tech types have done retreats at MAPLE in the past few years.”
The monk’s vision is quite grand. He wants to influence the thinking of technologists as a means to influence technology. He also wants to change AI itself, “seeing whether he and his fellow monks might be able to embed the enlightenment of the Buddha into the code.”
This is actually all serious business to Forall, who thinks that creating an enlightened AI is “the most important act of all time.” We need to “build an AI that walks a spiritual path”— one that will “persuade the other AI systems not to harm us.” In fact, he argues that “we should devote half of global economic output—$50 trillion, give or take—to that.” We need to build an “AI guru”—an “AI god.”
I finally warmed to this essay after a couple of readings. Although I make my living as a commentator on technology, not as an IT practitioner, I nevertheless wonder whether there actually could be an overarching or “ur” algorithm shaping all AI endeavors—an AI god, as it were.
What if, for example, all AI research initiatives included a requirement ultimately to increase human value and to deepen our understanding of how humans can best work toward peace, prosperity, and the sustainability of the planet?