As I sat before my computer preparing to write this article, staring at the dreaded blank page, it briefly crossed my mind that I could call on ChatGPT, the OpenAI chatbot, for help. I could tell it to write an introductory paragraph in a certain style – much as Annie Lowrey did as a test in a recent piece in The Atlantic – and hope that it would be enough to stimulate the ideas to follow (Lowrey, 2023).
And it probably would have worked.
Writers around the world have slipped into a slow-moving panic as ChatGPT proves, time and time again, that it can produce articles and stories that are eerily human. And it isn’t only writers who are concerned, but also those who receive writers’ work: online magazines, book publishers, and the like. It has become all too easy to use AI technology as a writing service, which is justifiably concerning.
But there is also reason to think that writing is a very human thing, and one that is unlikely to be entirely upended by artificial intelligence any time soon.
The limitations of LLMs
In a recent New York Times guest essay written by Noam Chomsky, Ian Roberts, and Jeffrey Watumull, ChatGPT is described as “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question” (Chomsky et al., 2023). That’s a linguistically dense yet accurate description of how technologies such as ChatGPT function.
Chatbots work by drawing on large language models (L.L.M.s for short), systems that learn by analyzing enormous amounts of digital text. “As it analyzes that sea of good and bad information from across the internet, an L.L.M. learns to...guess the next word in a sequence of words” (Metz, 2023). The system is impressive in its ability to simulate fact-based responses to certain questions, but analyzing data doesn’t allow a chatbot to develop imagination or creativity, only to replicate them.
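To make that “guess the next word” idea concrete, here is a toy sketch in Python. It is a drastic simplification – real L.L.M.s use neural networks trained on billions of words, not frequency tables over a one-sentence corpus – but the underlying objective is the same: given the words so far, predict a plausible next word.

```python
import random
from collections import defaultdict

# A toy corpus standing in for the "sea of good and bad information".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which words follow which -- a bigram model, the crudest
# possible version of "guess the next word in a sequence of words".
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def guess_next(word: str) -> str:
    """Pick a next word in proportion to how often it followed `word`."""
    candidates = following.get(word)
    return random.choice(candidates) if candidates else "<end>"

print(guess_next("the"))  # e.g. "cat", "mat", "dog", or "rug"
```

Nothing in this sketch understands cats or rugs; it only echoes patterns it has already seen, which is precisely the limitation being described.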
Musician Nick Cave drove this point home when one of his blog readers used ChatGPT to write a song “in the style of Nick Cave.” In response, Cave wrote:
“Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend. ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become.” (Cave, 2023)
Cave’s point is that much (if not all) of what we create grows from difficult, enlightening, or otherwise deeply felt experiences. Writing – whether a song, a book, a poem, or a letter – is an act of creation and must, according to Cave, be born of experience. Otherwise, it is simply “destined to imitate.”
Imitation is not creation
Because chatbots learn by analyzing text that humans have posted on the internet, their knowledge is forever limited by the past. All they can do is reflect back to us things we already know, even if they explain those things in new ways. They do not have the ability, as Chomsky, Roberts, and Watumull point out, “to say not only what is the case, what was the case and what will be the case... but also what is not the case and what could and could not be the case” (Chomsky et al., 2023). Because of this, the authors argue, chatbots lack true intelligence.
And true intelligence, it seems, is absolutely necessary when tackling the art of writing. Why? Because when you write, you draw on what you have read and learned in the past, but then you go further. You push beyond what you know and believe in order to develop new ideas. This requires an ability to conjecture about what could happen and what could not. Fiction authors, for example, can only write their stories if they have a deep understanding of what could happen if “x” were the case.
Gareth St John Thomas, the founder of Exisle Publishing, echoed this point when asked about the impact AI tech may have on the quality of book submissions: “The kind of ‘writing’ AI tech seems to be good for is not the material of books, but rather of unread academic papers, rules about carparking, and things local councils may get excited about... I don’t imagine any serious author will want to use the technology for work under their own name” (St John Thomas).
Why AI will still cause problems for publishers
Since the release of ChatGPT, publishers have seen a troubling increase in submissions. Neil Clarke, the editor of Clarkesworld Magazine, recently explained on his blog that submissions for February 2023 were nearly double those of January 2023, with close to 40% of the total being machine-written (Clarke, 2023). The deluge forced Clarke to close submissions for a time so that he could get a handle on the problem. He says, however, that AI-generated submissions are not difficult to spot. Not wanting to give their senders a leg up, he would say only that the machine-written pieces follow “very obvious patterns,” making them easy to detect.
The problem seems, for now, to be mostly confined to online magazines that specialize in short-form pieces and keep submissions open year-round. Books are harder to imitate, as they tend to pull from many sources, not all of them text-based. And books are, of course, much longer: patterns that might go undetected in short-form writing would almost certainly become apparent in long-form pieces.
For the time being, then, the biggest problem publishers face may simply be the time spent weeding out AI-written stories. The deluge of “fake” writing may also create problems for legitimate authors if publishers begin restricting submissions to cope with it.
How AI may make us better writers
St John Thomas actually sees some promise in the emergence of AI. “It can do some useful things,” he says, “and it encourages us to lift our game, to bring our human essence and individuality to the fore” (St John Thomas).
It can, indeed, do some useful things. Businesses are finding it particularly useful for writing captions or factual descriptions of, say, a product or an apartment up for sale. And, as I mentioned at the start of this piece, it can help writers generate ideas. But it can’t replace human ingenuity.
New chatbot technology is bringing some uncomfortable questions to the forefront, not least of which is what actually makes us human. That question won’t be answered in one generation, but whatever our “human essence” is, we’ll need to draw on it more and more as our writing competes with ever-improving AI tools. The competition may not be all that stiff for now, which gives us time to tap into our sense of humanity and individuality; there’s no question we’ll need it if we are to keep producing writing that machines can’t replicate.
Let’s ask the machine
Chatbots are amazingly efficient at sorting through large amounts of data and producing an answer that makes factual sense. So, to check the ideas in this piece against the larger consensus on the impact AI may have on writing and publishing, we decided to talk to a chatbot. We asked ChatGPT what kind of impact new AI technology may have on the quality of book submissions and on writing in general. Here’s what it wrote:
While AI technology can certainly assist in the writing process by providing suggestions or generating ideas, it's important to remember that writing ultimately comes from human imagination and experience. AI can help with the mechanics of writing, but it cannot replace the depth and complexity of human emotion and storytelling. As such, I believe that AI will not have a significant impact on the overall quality of book submissions or writing in general, but it may change the way that writing is approached and executed.
I read that answer, and then reread it, wondering why, if ChatGPT lacks personality and genuine insight, I found myself comforted by the fact that it also seems to believe that writing remains a very human thing.
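For readers who would rather pose such a question programmatically than through the chat interface, here is a minimal sketch using OpenAI’s official Python library. The model name and the exact wording of the prompt are illustrative assumptions; our own query was typed directly into ChatGPT.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Pose roughly the same question we asked ChatGPT.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed; substitute any available chat model
    messages=[
        {
            "role": "user",
            "content": (
                "What impact may new AI technology have on the quality "
                "of book submissions and on writing in general?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```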
Sources Cited
Cave, N. (2023, February 21). Issue #218 / January 2023. The Red Hand Files. Retrieved March 8, 2023, from https://www.theredhandfiles.com/chat-gpt-what-do-you-think/
Chomsky, N., Roberts, I., & Watumull, J. (2023, March 8). Noam Chomsky: The false promise of ChatGPT. The New York Times. Retrieved March 8, 2023, from https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html
Clarke, N. (2023, February 15). A concerning trend. Neil Clarke. Retrieved March 8, 2023, from http://neil-clarke.com/a-concerning-trend/
Lowrey, A. (2023, February 14). How ChatGPT will destabilize white-collar work. The Atlantic. Retrieved March 8, 2023, from https://www.theatlantic.com/ideas/archive/2023/01/chatgpt-ai-economy-automation-jobs/672767/
Metz, C. (2023, February 26). Why do A.I. chatbots tell lies and act weird? Look in the mirror. The New York Times. Retrieved March 8, 2023, from https://www.nytimes.com/2023/02/26/technology/ai-chatbot-information-truth.html
St John Thomas, G. (2023, March 7). Personal communication.