Shortly after the completion of the on-line book portion of this project—and in response to a query from one of my very first readers—I wrote a post about AI. My reader was curious about my critique of catechism as a learning tool and whether an artificial intelligence can, in fact, actually learn. So I explored how the training methods these software programs employ differ fundamentally from the ways brains acquire knowledge and understanding, and I explained that the prescriptive, repetitive process an AI uses to master a topic or skill is so dissimilar to how the brain works that AI cannot meet the very precise definition of intelligence in biology: namely, the capacity to solve novel problems.
My, how things appear to have changed!
Enter ChatGPT and its brethren: AIs that can write apparently unique essays and reports so that you no longer have to. And with software available to read aloud what the GPTs write, we won't even have to concern ourselves with the potential illiteracy problem I introduced last time. The computers will take care of it all!
Or will they? As I said in Sound It Out, it’s enough to keep this educator up at night. But I also said that I think all is not quite lost (nor maybe even what it seems), and I promised to reveal why. Here goes:
For starters, as I pointed out in my original posting about AI, these programs can mimic some highly sophisticated behaviors, but they can't originate them. ChatGPT can appear to be producing unique, new material, but the reality is that what looks "original" is merely a recombination of writing that already exists in the enormous database to which the software has access and which it has been trained to recognize as a "correct" response to a particular question. In other words, it can mimic a Toni Morrison or a Taylor Swift because it has every word or lyric either has ever written (at least so far). But this AI cannot be Toni Morrison or Taylor Swift. It cannot create a uniquely new, never-heard-before rendering of their voices. Hence, while ChatGPT might compose a Tay Tay parody, it cannot produce her next exclusive outpouring of words reflecting her own individual emotions. Ms. Swift's career is most definitely not in any immediate danger.
What is in danger are those fields of writing where genuinely novel responses are unnecessary—where the word bank and the grammar silo are already large enough. Thus, if you are a textbook author, advertising copywriter, contract lawyer, or other writer of the equivalent of instruction manuals, your day job may be at risk. In fact, a good friend of mine who has her own internet start-up has shared with me how excited she and her partner are by the release of ChatGPT because of how much money it will save them on authoring the on-line text they need for their business. So it isn't that this new AI won't be disruptive to the act and role of writing in society; it's just that it won't disrupt all writing everywhere (and I'll save the ethics of my friend's bottom-line capitalism for another day).
The second reason I don't think all is lost to the AIs of this world is that the drive to create seems almost as hard-wired into our brains as the hippocampus's hunger for new stimuli. I have only to observe the many students in the commons space directly outside my classroom spending countless hours making TikTok videos to know that the urge to create is alive and well (indeed, as their teacher, I wish I could get them to see that if they studied with the same effort and investment, they'd all be future Rhodes Scholars). What all their creativity reminds me of, though, is my 10th grade English teacher, who brilliantly understood that one of the best ways to get buy-in from students who were not intrinsically motivated academic learners was to help them see how learning to write made them more creative at the things they did care about. I can remember him reaching out to the kids whose passion was music, helping them to see that "the essay you just wrote will make you a better composer." Or the football players in the room who discovered "that when you learn to make proper patterns with the words, you can make better patterns in your plays." You name a population of student interest, and he knew how to hook them into writing.
Thus, because learning to write is one of the simplest ways to learn to create—and because I'm confident that there are at least a million—if not a billion—children out there who want to be the next Pentatonix, Patrick Mahomes, or Guy Fieri—I am optimistic that ChatGPT will not be the end of high school English after all. It will require rethinking how we teach it, how we assess it, and how we motivate those learning to do it. But truly original writing is likely to remain the domain of the brain, just as the truly creative thinking it took to build ChatGPT in the first place did.
Which brings me to my third reason why I do not think that this new AI is quite as demonic as first feared, and to understand this reason fully, my readers will need to take the time to listen to the very end of the final episode of Sold a Story. Throughout this series, its primary investigator, Emily Hanford, interviews not just parents who are struggling with the reading crisis but their young children as well. She then employs these very same children to read every single one of the final credits for the podcast, and the recording of these developing voices as they wrestle with sounding out the unfamiliar names and words—with supportive parents clearly guiding in the background—is truly joyous. You can hear the delight in each child’s voice at the sense of power their ability to read provides them, and you can almost feel their love of the act of reading in the cadence of their tones. If you need some optimism for the future of the written word, go listen to these kids!
However, that same optimism leads me to the cautionary part of this essay: artificial intelligence programs such as ChatGPT, self-parking cars, cancer-screening software—and nearly every appliance in your house—are here to stay. Ubiquitous AI is now simply part of the fabric of 21st-century existence, and therefore, as Kevin Roose of The New York Times wisely points out, our children are going to need to learn how to live with these programs as integral parts of their lives. He suggests that we deliberately employ ChatGPT as one of our educational tools precisely because we need students to know how to distinguish fact from fiction, truth from falsehood, the functional from the garbage. ChatGPT is already being employed to create both information and disinformation on-line, and our children are only going to learn how to tell the difference if they interact with the AIs that can generate both.
That, by the way, is why we do not yet have fully autonomous self-driving cars: we have witnessed firsthand the limits of the AI responsible for them and know to proceed with caution. With ChatGPT here to stay, we are going to need to do the same in our schools.
References
Hanford, E. (2022). Sold a Story: How Teaching Kids to Read Went So Wrong. Minnesota Public Radio. https://features.apmreports.org/sold-a-story/.
Herman, D. (2022, December 9). The End of High School English. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-english-essay/672412/.
Klepper, D. (2023, January 28). AI-Powered Tools Have Ability to Create Propaganda and Lies. The Baltimore Sun. https://digitaledition.baltimoresun.com/html5/desktop/production/default.aspx?&edid=d3e7db0e-e4b1-479d-9030-364b0920bd8d.
Roose, K. (2023, January 12). Don't Ban ChatGPT in Schools. Teach with It. The New York Times. https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html.