Value insights: Can AI hallucinate?

The latest developments in artificial intelligence are impressive. However, as with investors, the way AI systems ‘think’ imposes limits on their abilities and leaves them prone to error.

Apr 13, 2023

10 minutes

Alessandro Dicorrado
Steve Woolley

One thing that seemingly everybody agrees on today is the need to get in on some ChatGPT action. Waves of AI-related excitement have flowed and ebbed before, but the accessibility and ease of use of ChatGPT have made this latest wave feel more tangible, relevant and worrying than ever before.

We should firstly make clear that, on any canonical definition, we know nothing about this topic. None of us is a computer scientist, and our interaction with ChatGPT goes no further than testing how good it is at writing stock recommendations (not very good) and non-investment-related rude stuff (quite good). However, we have an interest in how the world works, so we have borrowed some useful thoughts from people who know more than us. Three of the best books we have drawn on are listed at the bottom of this piece.

Most artificial intelligence today is based on circuits called neural networks. These are essentially systems of equations that define the ‘thought process’ of the AI. This thought process is generated by the machine itself through machine learning: scientists feed the computer lots of data (the ‘training set’) and let the machine figure out the patterns in it, which are then codified in a system of interrelated equations. ChatGPT’s creators fed a machine-learning algorithm the internet, and the algorithm came up with its best description of the written stuff on the web. When someone asks it something, ChatGPT forms its ‘understanding’ of the question and composes its response by finding the sequence of words that best fits its model. In mathematical terms, the best fit is the one that minimises the error in the system of equations.
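To make that last point concrete, here is a minimal sketch of our own in Python – purely illustrative, and nothing like ChatGPT's actual scale or architecture – of what ‘minimising the error’ means: a model with adjustable weights is nudged, step by step, towards the settings that best fit its training data.

```python
# Toy illustration: a model 'learns' by repeatedly adjusting its weights in
# whichever direction reduces its error on the training set. The data and
# model here are hypothetical stand-ins, not anything ChatGPT actually uses.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(100, 3))                      # 100 training examples, 3 features each
true_w = np.array([2.0, -1.0, 0.5])                # the pattern hidden in the data
y = x @ true_w + rng.normal(scale=0.1, size=100)   # noisy observed outputs

w = np.zeros(3)          # the model's adjustable parameters, initially knowing nothing
learning_rate = 0.1

for step in range(500):
    predictions = x @ w                  # the model's current guesses
    errors = predictions - y             # error relative to the training data, not the world
    gradient = x.T @ errors / len(y)     # which way to move w to reduce the squared error
    w -= learning_rate * gradient        # take a small step 'downhill'

print(w)  # ends up close to true_w: the best fit the training data allows
```

The ‘best fit’ this loop converges on is defined entirely by the training data and the model; nothing in the process checks the answer against the outside world.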

The system has millions, if not billions, of variables, and performing the required calculations in sequence (as a normal computer processor would) would take too long and require too much power. The nice property of neural networks is that they are built with matrices, which means they are: a) parallelisable (i.e., calculations can be performed simultaneously, as opposed to in sequence); and b) differentiable (the derivative at any given point indicates whether the algorithm is going in the right direction or needs to take a different path – in other words, differentiation allows the system to minimise errors faster). When neural networks were first devised, the only chips capable of bulk parallel calculations were gaming chips, whose graphics requirements necessitated the simultaneous rendering of many details; hence Nvidia's head-start in this field. It is worth emphasising that the ‘error’ in this process means estimation error with respect to the model, not relative to the real world.
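A toy sketch of our own illustrates the first of these properties (the second, differentiability, is what the gradient step in the earlier sketch relies on): processing training examples one at a time and processing the whole batch as a single matrix multiplication give identical answers, but the matrix form is exactly the kind of bulk arithmetic a graphics chip can execute all at once.

```python
# Sketch of why matrices matter: the same layer applied example-by-example
# (sequential) and as a single matrix product (parallelisable on a GPU).
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.normal(size=(4, 3))    # 4 examples, 3 features each (made-up numbers)
weights = rng.normal(size=(3, 2))   # a layer mapping 3 inputs to 2 outputs

# Sequential: one example at a time, as a conventional processor would work through them.
sequential = np.array([example @ weights for example in inputs])

# Parallel: one matrix multiplication covering every example at once,
# which is the bulk arithmetic graphics chips were designed for.
parallel = inputs @ weights

assert np.allclose(sequential, parallel)   # identical results, very different workloads
```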

The above properties make neural networks good at inferring rules for problems on which we have lots of data, and where the task is specific and bounded. This is known as ‘narrow’ AI, and ChatGPT is an example of it. So is image recognition, the prediction of protein folding (which Google's DeepMind has largely cracked) and even the algorithm that optimises suggestions on social media. However, the ambition of almost anyone working on narrow AI today is to get to AGI – artificial general intelligence, meaning an intelligence capable of both tackling a variety of problems and continuously learning. The general narrative that currently surrounds AI draws a straight line between the achievements of narrow AI today (which are indeed remarkable) and the inevitability of this technology's eventual evolution into AGI. In our view, that conclusion is misguided.


An AI system called Galactica recently stated that Elon Musk died in a car crash in 2018. This is plainly wrong, and contradicted by plenty of data in the training set. So how could the system get it wrong? For one, despite being called ‘neural’, these systems do not work at all like a human brain. They are designed to solve a mathematical problem, but have no understanding of what they are processing – they have no real-world representation of the data they handle. Representation is the business of the brain, enabling it to excel at recursive thought (i.e., thoughts made of other thoughts). For example, commuting to work for a Londoner might require exiting the house, walking to the tube, getting on, getting off, walking to an office and arriving at a desk. The brain packages these things into one thought (going to work) and at each step draws on pre-formed concepts like door, street, tube and so on, which are broken down into further concepts (door handles, traffic lights, etc.). The brain does not need a comprehensive description of the objects and how they work every time the action is performed. It builds reference frames to represent objects and concepts in the world, so that words are more than just numerical entries: they stand for real-world concepts. The brain is said to have an ‘expressive’ programming language.

Software programming languages like Python are also expressive, but they are limited by what we code into them. The point of AI is that it should not be defined by what we tell it, but should be able to ‘think’. Therefore, AI must be built as a circuit (just as the brain is a circuit of neurons), a processing system independent of software. However, the language of neural networks is inexpressive – it doesn't do concepts. So in order to translate representation into circuitry, we need to describe everything to it. This works for problems that can be completely solved. A calculator is a great example of a full description of the task. So is chess, a game that, given enough processing power, can be solved. Today, it is essentially impossible for a human to beat a machine at chess.
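As a small illustration of a fully describable task (our own example, not one from the books below), the complete ‘rules’ of a calculator's addition fit in a line or two of Python; there is no comparably short rulebook for, say, recognising a school bus in a photograph, which is why such tasks get handed to machine learning instead.

```python
# A complete description of the task: nothing is left for the machine to infer.
def add(a: float, b: float) -> float:
    return a + b

# Chess is similar in kind (its rules can be written out exhaustively), just
# vastly larger; 'is this photo a school bus?' has no such finite rulebook.
print(add(28, 42))  # 70.0
```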

Once we move beyond tasks that can be solved from beginning to end, the number of rules necessary to describe the problem explodes to impractical levels. Computer scientists therefore invented machine learning as a way for machines to figure out the rules themselves. To do this, they need a vast set of data that covers all possible variations of the task, so that the machine can infer a complete set of rules. This state of affairs is rarer than you might think. The lack of conceptual understanding means that neural networks are remarkably susceptible to errors if their dataset is incomplete or of insufficient quality, which is most of the time.
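A toy example of our own shows what an incomplete dataset does: a model fitted only on inputs between 0 and 1 looks accurate inside that range, but is wrong by orders of magnitude at an input of 10 – a region its training data never covered – and nothing in the model flags that it has left safe territory.

```python
# A model fitted to data from a narrow region fails, silently, outside it.
import numpy as np

x_train = np.linspace(0, 1, 50)
y_train = np.sin(2 * np.pi * x_train)          # the 'real world' the data samples

coeffs = np.polyfit(x_train, y_train, deg=3)   # the learned 'rules': a cubic curve

# Inside the training range the fit looks respectable...
print(np.polyval(coeffs, 0.25), np.sin(2 * np.pi * 0.25))
# ...outside it, the model is wildly wrong, with no warning that it is guessing.
print(np.polyval(coeffs, 10.0), np.sin(2 * np.pi * 10.0))
```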

Take image recognition, where algorithms have supposedly reached superhuman levels. It turns out that they are hilariously vulnerable to tiny tweaks to the images they are presented with, even though the changes might be invisible to the human eye. Change just a few pixels and the circuit mistakes a school bus for an ostrich (https://www.sciencediplomacy.org/article/2022/weapon-mistook-school-bus-for-ostrich). The consequences of failure in image recognition are particularly obvious in applications such as driverless cars. Or take the recent news that, six years after we abandoned any hope of human superiority in the ancient board game Go when AlphaGo beat the world champion, just a month ago an amateur player defeated a ‘superhuman’ computer. Comprehensively, we might add, and it was not a fluke. It happened because the possible variations in Go vastly outnumber those in chess, so no training set can cover them all – leaving blind spots that a determined opponent can exploit (https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/).
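The mechanism behind those fragile image classifiers can be shown on a toy model. The sketch below (our own illustration, far simpler than a real vision network) builds a linear ‘classifier’ and then works out the smallest uniform nudge to the inputs that flips its decision; the nudge is small relative to a typical pixel value, far too small for a person to care about, yet the label changes.

```python
# Toy 'adversarial example': a change too small to matter to a human is enough
# to flip a simple model's decision. Real attacks on image recognisers exploit
# the same idea at far larger scale; the numbers here are made up.
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=1000)        # the classifier's weights ('bus' if the score > 0)
image = rng.normal(size=1000)    # stand-in for an image's pixel values

score = float(w @ image)

# Smallest uniform per-pixel nudge, aimed against the current decision,
# that pushes the score past zero and so flips the predicted label.
epsilon = 1.01 * abs(score) / np.abs(w).sum()
adversarial = image - epsilon * np.sign(w) * np.sign(score)

print(epsilon)                          # small relative to a typical pixel value (~1.0 here)
print(score, float(w @ adversarial))    # the sign, and hence the label, flips
```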

Even ChatGPT, impressive as it is, fails to generalise. Ask it to do a simple sum like 28+42, and it will return 70, because it has millions of examples on the internet of this particular sum. But ask it something it has never seen before – say a three- or four-digit addition problem, particularly one that involves carrying – and it might fail, even though there is plenty of content on the internet that explains how to do this. Move on to three- or four-digit multiplications and its failure rates skyrocket. It is much the same if you try to play chess with it: ChatGPT has plenty of grandmaster chess games in its database, and it therefore approaches chess as a sequence of letter-and-number notations (Knight to C3, Queen to D5 and so on), because that is what a chess game looks like when it is written down. But it has no idea what these notations relate to, and it has no idea that there is a chess board with pieces on it that are trying to checkmate each other. The consequence is that when you ask ChatGPT to play chess, it will routinely make completely illegal moves – again, even though there are plenty of tutorials online on how to play chess. The same errors are almost certainly present in the other language-generation problems that ChatGPT is asked to tackle: it just doesn't know that there is a world out there and that there are things that are true and false about that world. It just has its database.
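For what it is worth, the procedure ChatGPT fails to apply reliably is short enough to write out in full. The sketch below (ours, not anything inside ChatGPT) spells out column-by-column addition with carrying; once the rule is stated, it generalises to numbers of any length, which is exactly what pattern-matching over previously seen examples does not do.

```python
# Column-by-column addition with carrying, written out as explicit rules.
def add_with_carry(a: str, b: str) -> str:
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)          # pad the shorter number with zeros
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):   # work right to left, column by column
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))             # the digit we write down
        carry = total // 10                        # the digit we carry to the next column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_with_carry("28", "42"))       # '70'  (the case ChatGPT has seen countless times)
print(add_with_carry("4879", "6534"))   # '11413' (same rules, numbers it may never have seen)
```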

We tend to anthropomorphise these systems, but their ‘human-ness’ is an illusion. When they get a problem right we get excited. But the reality is that the systems have not figured out anything at all; they just compute combinations of letters and numbers that closely fit their algorithms. Sometimes they happen to have the right answer, sometimes not.


A pressing issue is that we do not appear capable of predicting where the failure points are. Because an inexpressive language requires an enormously complicated and overly bulky description of every single item, there are going to be blind spots. The rules of Go, written in the Python programming language, are a page long, but they are millions of pages long when written in circuitry. To learn those millions of pages in an inexpressive language requires billions of experiences (hence the enormous dataset), but that depth of data does not exist for all the topics we might want to teach an AI. The universe does not scale that way: there simply is not enough material on the planet to construct a computer big enough to run a neural network with the artificial general intelligence of a five-year-old child, let alone an adult. In turn, this means that most AI datasets are going to be incomplete, and hence riddled with errors whose location we are oblivious to. Not only do we have no transparency into the various possible paths of the circuit, but more importantly, there is no foolproof debugging system. All we can do is feed it more data and hope that the system figures out its blind spots.

If we are going to get to artificial general intelligence, it seems, we need an expressive language through which the circuit can understand concepts. However, those currently working on this philosophical strain of AI are a small minority, and success appears some way off. Despite this, it seems clear that society had better figure out a coordinated way to deal with the current AI developments, narrow as they may be. For one thing, it is somewhat unsettling that most AI research is currently conducted within the big tech companies, which routinely engage in levels of public experimentation that probably should not be left entirely to corporate boardrooms.

For another, because these systems have no representation of the real world at all, they cannot validate the information they are being asked to generate. This means that one could easily have a system make data up (scientific papers, pictures, deep-fakes, and so on). The consequences of this sort of data generation for trust, democracy and the fabric of society are really quite worrying.

There is plenty to be excited about in the latest AI developments, but it is also important to think about where they can be applied feasibly and reliably, and where they might instead just hallucinate.

Three books we have found useful and accessible to non-technical readers (i.e., us):
  • A Thousand Brains: A New Theory of Intelligence, by Jeff Hawkins
  • Human Compatible: AI and the Problem of Control, by Stuart Russell
  • Rebooting AI: Building Artificial Intelligence We Can Trust, by Gary Marcus.

This is not a buy, sell or hold recommendation for any particular security.

Authored by

Alessandro Dicorrado
Steve Woolley

Important Information

This communication is provided for general information only and should not be construed as advice.

All the information in this communication is believed to be reliable but may be inaccurate or incomplete. The views are those of the contributor at the time of publication and do not necessarily reflect those of Ninety One.

Any opinions stated are honestly held but are not guaranteed and should not be relied upon.

All rights reserved. Issued by Ninety One.

For further information on indices, fund ratings, yields, targeted or projected performance returns, back-tested results, model return results, hypothetical performance returns, the investment team, our investment process, and specific portfolio names, please click here.