Can ChatGPT edit fiction? Four editors put it to the test
Editors Rose Michael, Sharon Mullins, Renée Otmar and Katherine Day have previously explored how AI tools might work, or be made to work, for editorial. In this extract from an essay in The Conversation, they ask ChatGPT to edit a short story by Michael. In a first round of editing, ChatGPT gives ‘the sort of suggestions an editor might write in response to almost any text—not particularly specific to this story, or to our stated aim of submitting it to a literary publication’.
Next, we provided a second prompt, responding to ChatGPT’s initial feedback—attempting to emulate the back-and-forth discussions that are a key part of the editorial process.
We asked ChatGPT to take a more practical, interventionist approach and rework the text in line with its own editorial suggestions:
Thank you for your feedback about uneven pacing. Could you please suggest places in the story where the pace needs to speed up or slow down? Thank you too for the feedback about imagery and description. Could you please suggest places where there is too much imagery and it needs more action storytelling instead?
That’s where things fell apart.
ChatGPT offered a radically shortened, substantially altered story. The atmospheric descriptions, evocative imagery and nods towards (unspoken) mystery were replaced with unsubtle phrases—which Rose swears she would never have written, or signed off on.
Lines added included: ‘my daughter has always been an enigma to me’, ‘little did I know’ and ‘a sense of unease washed over me’. Later in the story, ChatGPT clumsily reached for the same construction a second time: ‘relief washed over me’.
The author’s unique descriptions were changed to familiar clichés: ‘rugged beauty’, ‘roar of the ocean’, ‘unbreakable bond’. ChatGPT also changed the text from Australian English (which all Australian publications require) to US spelling and style (‘realization’, ‘mom’).
In summary, a story where a mother sees her daughter as a ‘southern selkie going home’ (phrasing that hints at a speculative subtext) on a rocky outcrop and really sees her (in all possible, playful senses of that word) was changed to a fishing tale, where a (definitely human) girl arrives home holding up, we kid you not, ‘a shiny fish’.
It became hard to give credence to any of ChatGPT’s advice.
Esteemed editor Bruce Sims once advised that it’s not an editor’s job to fix things; it’s an editor’s job to point out what needs fixing. But if you are asked to be a hands-on editor, your revisions must be an improvement on the original—not just different. And certainly not worse.
It is our industry’s maxim, too, to first do no harm. Not only did ChatGPT not improve Rose’s story, it made it worse.
What did the human editors do?
ChatGPT’s edit did not come close to the calibre of insight and editorial know-how offered by Overland editor Claire Corbett. Some examples:
There’s some beautiful writing and fantastic themes, but the quotes about drowning are heavy-handed; they’re given the job of foreshadowing suspense, creating unease in the reader, rather than the narrator doing that job.
The biggest problem is that final transition—I don’t know how to read the narrator. Her emotions don’t seem to fit the situation.
For me stories are driven by choices and I’m not clear what decision our narrator, or anyone else in the story, faces.
It’s entirely possible I’m not getting something important, but I think that if I’m not getting it, our readers won’t either.
Freelance editor Nicola [Redhouse], who has a personal relationship with Rose, went even further in her exchange (in response to the next draft, where Rose had attempted to address the issues Claire identified). She pushed Rose to work and rework the last sentence until they both felt the language lock in and land.
I’m not 100% sold on this line. I think it’s a little confusing … It might just be too much hinted at in too subtle a way for the reader.
Originally, the final sentence read: ‘Ready to make my slower way back to the house, retracing—overwriting—any sign of my own less-than more-than normal prints.’
The final version is: ‘Ready to make my slower way back to the house, retracing, overwriting, any sign of my own less-than, more-than, normal prints.’ With the addition of a final standalone line: ‘I have seen what I wanted to see: her, me, free.’
Claire and Nicola’s feedback shows how an editor is a story’s ideal reader. A good editor can guide the author through problems with point of view and emotional dynamics—going beyond the simple mechanics of grammar, sentence length and the number of adjectives.
In other words, they demonstrate something we call editorial intelligence.
Editorial intelligence is akin to emotional intelligence. It incorporates intellectual, creative and emotional capital—all gained from lived experience, complemented by technical skills and industry expertise, applied through the prism of human understanding.
Skills include confident conviction, based on deep accumulated knowledge, meticulous research, cultural mediation and social skills. (After all, the author doesn’t have to do what we say—ours is a persuasive profession.)
Round 2: the revised story
Next, we submitted a revised draft that had addressed Claire’s suggestions and incorporated the conversations with Nicola.
This draft was submitted with the same initial prompt: ‘Hi ChatGPT, could I please ask for your editorial suggestions on my short story, which I’d like to submit for publication in a literary journal?’
ChatGPT responded with a summary of themes and editorial suggestions very similar to what it had offered in the first round. Again, it didn’t pick up that the story had already been published, nor did it clearly identify the genre.
For the follow-up, we asked specifically for an edit that corrected any issues with tense, spelling and punctuation.
It was a laborious process: the 2500-word piece had to be submitted in chunks of 300–500 words and the revised sections manually combined.
However, these simpler editorial tasks were clearly more in ChatGPT’s ballpark. When we created a document (in Microsoft Word) that compared the original and AI-edited versions, the flagged changes appeared very much like a human editor’s tracked changes.
But ChatGPT’s changes revealed its own writing preferences, which didn’t allow for artistic play and experimentation. For example, it reinstated prepositions like ‘in’, ‘at’, ‘of’ and ‘to’, which slowed down the reading and reduced the creativity of the piece—and altered the writing style.
This makes sense when you understand how ChatGPT works: trained on vast datasets, it generates text by predicting the word most likely to come next. (Future models might be directed differently, towards more creative—and less stable or predictable—output.)
Round 3: our final submission
In the third and final round of the experiment, we submitted the draft that had been accepted by Meanjin.
The process kicked off with the same initial prompt: ‘Hi ChatGPT, could I please ask for your editorial suggestions on my short story, which I’d like to submit for publication in a literary journal?’
Again, ChatGPT offered its rote list of editorial suggestions. (Was this even editing?)
This time, we followed up with separate prompts for each element we wanted ChatGPT to review: title, pacing, imagery/description.
ChatGPT came back with suggestions for how to revise specific parts of the text, but the suggestions were once again formulaic. There was no attempt to offer—or support—any decision to go against familiar tropes.
Many of ChatGPT’s suggestions—much like the machine rewrites earlier—were heavy-handed. The alternative titles, like ‘Seaside Solitude’ and ‘Coastal Connection’, used cringeworthy alliteration.
In contrast, Meanjin’s editor Tess Smurthwaite—on behalf of herself, copyeditor Richard McGregor, and typesetter Patrick Cannon—offered light revisions:
The edits are relatively minimal, but please feel free to reject anything that you’re not comfortable with.
Our typesetter has queried one thing: on page 100, where ‘Not like a thing at all’ has become a new para. He wants to know whether the quote marks should change. Technically, I’m thinking that we should add a closing one after ‘not a thing’ and then an opening one on the next line, but I’m also worried it might read like the new para is a response, and that it hasn’t been said by Elsie. Let me know what you think.
Sometimes editorial expertise shows itself in not changing a text. Different isn’t necessarily good. It takes an expert to recognise when a story is working just fine. If it ain’t broke, don’t fix it.
It also takes a certain kind of aerial, bird’s-eye view to notice when the way type is set creates ambiguities in the text. Typesetters really are akin to editors.
The verdict: can ChatGPT edit?
So, ChatGPT can give credible-sounding editorial feedback. But we recommend editors and authors don’t rely on it for individual assessments or expert interventions any time soon.
A major problem that emerged early in this experiment involved ethics: ChatGPT did not ask for or verify the authorship of our story. A journal or magazine would ask an author to confirm a text is their own original work at some stage in the process: either at submission or contract stage.
A freelance editor would likely use other questions to determine the same answer—and in the process of asking about the author’s plans for publication, they would also determine the author’s own stylistic preferences.
Human editors demonstrate their credentials through their work history, and keep their experience up-to-date with professional training and qualifications.
What might the ethics be, we wonder, of giving the same recommendations to every author asking for editing advice? You might be disgruntled to receive generic feedback if you expect or have paid for individual engagement.
As we’ve seen, when writing challenges expected conventions, AI struggles to respond. Its primary function is to appropriate, amalgamate and regurgitate—which is not enough when it comes to editing literary fiction.
Literary writing aims to—and often does—convey so much more than what the words on screen explicitly say. Literary writers strive for evocative, original prose that draws upon subtext and calls up undercurrents, making the most of nuance and implication to create imagined realities and invent unreal worlds.
At this stage of ChatGPT’s development, literally following the advice of its editing tools is likely to make literary fiction worse, not better.
In Rose’s case, her oceanic allegory about difference, with a nod to the supernatural, was turned into a story about a fish.
ChatGPT is ‘like the new intern’
This experiment shows how AI and human editors could work together. AI suggestions can be scrutinised—and integrated or dismissed—by authors or editors during the creative process.
And while many of its suggestions were not that useful, AI efficiently identified issues with tense, spelling and punctuation (within an overly narrow interpretation of these rules).
Without human editorial intelligence, ChatGPT does more harm than help. But when used by human editors, it’s like any other tool—as good, or bad, as the tradesperson who wields it.