Nigel George is an educator and author of five books on technology and self-publishing. In the first instalment of his series on AI tools, Nige reframes what these ‘dumb machines’ actually are, and what they are and are not capable of doing, to help demystify the roles AI could play in the publishing process.
‘Artificial Intelligence’ is an Oxymoron
One of the distinct advantages of having both a tech head and a creative head is that when something new and bright and shiny comes along, tech brain can dig in and calmly try to understand it, even while creative brain is busy screaming about the apocalypse.
There is no doubt that the latest natural language AI tools are impressive.
The uncanny ability of tools like ChatGPT to supply intelligent-sounding answers to everyday questions has certainly stolen more headlines and chewed up more hysterical column inches than a certain ex-President.
Something important is getting lost in the noise though—there is nothing artificial about the intelligence behind these tools.
In part one of this three-part series, I’d like to step back from the bleeding edge and the hysterical noise and reframe AI in a non-technical, very human context.
What is AI exactly?
As clever as it may sound, an AI is still a dumb machine. Sophisticated and impressive, without doubt, but still a machine that cannot do what it wasn’t programmed to do.
The concept of a ‘thinking machine’ is a very old idea.
Before we invented digital computers, machines could only do what we physically designed them to do. Once we had the computer, however, the idea of creating machines that could vary their function based on what was programmed into them took hold.
We hadn’t created a machine that could think, but we did create one that could learn.
The problem was, computers could only talk with ones and zeros. Except for quantum computers (they exist, but aren’t very useful at this stage), this is still the case today.
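To make that concrete, here is a two-line snippet in Python (any language would do, this is purely an illustration) showing how a single letter ends up as a pattern of ones and zeros inside the machine.

```python
# Every character a computer handles is ultimately a pattern of bits.
# The letter "A", for example, is stored as the number 65: 01000001.
print(format(ord("A"), "08b"))  # prints 01000001
```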
Humans came up with this cool idea that it would be great if we could communicate with computers using everyday language. Natural Language Processing (NLP) was born.
Unfortunately, it wasn’t long before we realised computers at the time were incapable of interpreting natural language in any useful way.
Fast forward a couple of decades and computers finally got powerful enough to do useful things with NLP. We also learned along the way that the secret to understanding human language was not just understanding the meaning of words, but how those words are used in context.
This was our next hurdle to overcome because computers suck at context.
Despite the uncanny ability of modern AIs to sound like they understand what we are asking them, they don’t have a clue what they are saying. They rely on sophisticated mathematical and predictive techniques to spit out the most likely response to a question.
These responses don’t come from intuition or experience on the part of the AI. They’re assembled from vast databases of word fragments called tokens. More on tokens soon.
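To give ‘token’ some shape, here is a deliberately tiny sketch in Python. The vocabulary and the splitting rule are invented for this example (real tokenisers learn vocabularies of tens of thousands of pieces from enormous amounts of text), but it shows the idea: a word like ‘self-publishing’ is stored and predicted as smaller pieces, not as a whole word.

```python
# A toy illustration of tokenisation, not a real tokeniser.
# The vocabulary and the splits below are invented for this example.

toy_vocabulary = ["publish", "writer", "self", "ing", "the", "-", "s"]

def toy_tokenise(text: str) -> list[str]:
    """Greedily match the longest known piece at each position."""
    pieces_longest_first = sorted(toy_vocabulary, key=len, reverse=True)
    tokens, remaining = [], text.lower()
    while remaining:
        match = next(
            (p for p in pieces_longest_first if remaining.startswith(p)),
            remaining[0],  # unknown text falls back to single characters
        )
        tokens.append(match)
        remaining = remaining[len(match):]
    return tokens

print(toy_tokenise("self-publishing"))  # ['self', '-', 'publish', 'ing']
```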
Until 2017, we relied on a kind of neural network (the recurrent neural network, for those keeping score) to glean the meaning from a phrase or sentence so the AI could give a sane response more often than not. This approach proved quite useful and is still used in many day-to-day tools, like handwriting and speech recognition.
The problem with this technique is that on longer statements it has a habit of losing the context.
We tweaked this technique for years, but the real breakthrough came in 2017, when some brilliant humans at Google invented the transformer, an architecture built around a trick called multi-head attention.
Despite sounding like a Hasbro trademark, it provided a way to interpret much longer natural language statements. GPT (and others) were born.
GPT stands for Generative Pre-trained Transformer. The secret sauce is the ‘pre-trained’ bit.
AIs are not intelligent as we understand it. They are word prediction machines that rely on immense token databases to calculate the most likely response to any question. And I do mean immense: GPT-3 stores around 175 billion token relationships (called trainable parameters in tech lingo), built from a training set of roughly 500 billion tokens. GPT-4 reportedly has a trillion or more trainable parameters.
The people behind GPT built these databases by basically scraping the entire internet. The algorithm in GPT then trained itself (with the help of humans) using this vast database to build mega lists of appropriate responses to just about anything we might ask it.
But there was a problem: GPT is not a human; it is a computer program running on massive computer networks. It doesn’t have a moral compass; it has no sense of wrong or right.
It doesn’t have the slightest clue what it just said to you.
This is because what the program outputs is just a string of word fragments from the AI’s database, mashed together in whatever order the maths says has the highest probability of forming the words you want to hear.
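If you would like a feel for what ‘high probability’ means here, the sketch below is a deliberately crude word predictor written in Python. It counts which word follows which in a scrap of made-up training text and always picks the most common follower. Real models predict sub-word tokens with neural networks holding billions of parameters, and they weigh the whole context rather than just the previous word, but the basic loop (predict the next piece, add it, repeat) is the same.

```python
from collections import Counter, defaultdict

# A deliberately crude "language model": count which word follows which
# in a scrap of training text, then always predict the most common
# follower. Real systems predict sub-word tokens with neural networks,
# not whole words with raw counts, but the loop is the same idea.

training_text = (
    "the writer wrote a book and the writer wrote a book blurb "
    "so the writer sold a book"
)

followers = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    return followers[word].most_common(1)[0][0]

# Generate text one prediction at a time, feeding each guess back in.
word = "the"
output = [word]
for _ in range(4):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # prints: the writer wrote a book
```

Run it and the toy happily ‘writes’ the writer wrote a book without having the faintest idea what a writer or a book is, which is rather the point.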
Because they are trained using all the good and bad things we put on the internet, AIs can say extremely hateful and hurtful things. AIs also have a habit of ‘hallucinating’, which is just a nice way of saying ‘totally making shit up’. This should come as no surprise when you understand one of the major sources of training data for GPT was Reddit …
To address this issue, the AI developers started round two of the training, which involved hiring a bunch of contractors to teach the AI the most appropriate responses to thousands of questions.
To keep the sample broad and unbiased, the developers drew these questions from the hundreds of thousands of questions actual humans had already asked the AI. They also programmed in a bunch of new rules to stop the AI from producing hateful and hurtful responses.
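As a rough analogy for what that second round adds, imagine bolting two extra layers onto the toy predictor above: a set of human-approved answers that take priority, and a blocklist that catches anything unacceptable. The real process (fine-tuning with human feedback) retrains the model itself rather than patching its output, and real safety systems go far beyond a word list, so treat this sketch, and every name in it, as invented purely for illustration.

```python
# An analogy only: real fine-tuning changes the model's internal
# weights, and real safety systems go far beyond a word blocklist.

human_approved_answers = {
    "how do i publish a book?": "Start by finishing the manuscript, "
                                "then decide on your publishing path.",
}
blocked_words = {"hateful", "hurtful"}

def respond(question: str, raw_model_output: str) -> str:
    """Prefer a human-curated answer; otherwise screen the raw output."""
    if question.lower() in human_approved_answers:
        return human_approved_answers[question.lower()]
    if any(word in raw_model_output.lower() for word in blocked_words):
        return "I'm not able to help with that."
    return raw_model_output

print(respond("How do I publish a book?", "whatever the raw model said"))
```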
Which is roughly where we are at today.
For the sake of clarity and brevity, I have skipped a lot of detail and taken a bit of creative licence at the risk of offending the tech pedants out there, but the point of this intro is to highlight the very human origins of the latest natural language AIs, and their limitations.
Going right back to the beginning—it’s still just a dumb machine.
We have without doubt created machines that are immensely powerful at learning, but they still can’t think for themselves.
Despite the whole internet of information and a trillion learning opportunities, an AI still can’t outstrip your average four-year-old in knowing it’s not OK to lie.
That robot assembling your car is not wondering what it might get up to on the weekend while it welds.
Ultimately, it is human ingenuity (and hopefully human morality and ethics!) that will decide how far we go with AI.
AIs can already produce pretty decent copy for a range of writing tasks. There is no doubt natural language AIs will reach the point where they can produce a reasonable facsimile of a novel written by a human.
But it will never be a human. An AI can mimic Shakespeare, but it will never be Shakespeare.
This is why, despite the challenges they pose, I am broadly positive about living in a world with AIs that often write better than us.
Humans haven’t been able to beat chess AIs for years, but that hasn’t killed the game of chess. In fact, most serious players now use chess AIs as analysis tools to improve their game.
This is where I see a great opportunity: using these AIs as tools to improve and to grow as writers.
In part two of this series, I will explore some challenges and opportunities presented to writers by the latest natural language AIs.
Nigel George is an entrepreneur and manager with 25+ years’ experience building and managing technology companies. He is the author of five books on technology and self-publishing. He has independently published his books since 2015 and is an expert on how to build and run a successful independent publishing business. Nigel is passionate about passing on his expertise to other authors, teaching them how to succeed as an independent author. You can learn more about his work on his website.