Until recently, humans could take pride in their unique abilities. No other animals play board games, write essays, or prove mathematical theorems. However, recent advances in artificial intelligence (AI) challenge our self-image as the most intelligent beings. AI systems now beat us at complex games, produce polished prose, and win mathematics medals. Tech CEOs promise superhuman AI is imminent. So, in an age of AI, are human minds still special, or merely also-rans?
Intelligence Is Not a Single Scale
Talk of superhuman AI assumes that intelligence is a single scale. Parents often mark children's heights on a doorframe, watching younger siblings catch up to and eventually surpass older ones. Watching AI systems seemingly overtake us feels like that moment. However, intelligence is not like height. There is only one way to be tall, but many ways to be smart. Other animals illustrate this: birds navigate, ants cooperate, and spiders hunt, each shaped by its environment to be smart in a different way.
Humans are no different. Our minds are shaped by our biology. We live only a few decades, learning and doing everything in that short time, guided by a kilogram of neurons inside our skulls. We share thoughts by making mouth noises or tapping fingers. AI systems face none of these constraints. They process more data than a human sees in a lifetime, expand capacity with more computers, and easily share information with other machines.
Limitations That Make Us Special
Our short lives, squishy brains, and mouth noises might seem like limitations compared with machines, but they are what make us special. Human intelligence is a response to these limitations. To make the most of our brief lives, we have an amazing ability to learn from limited experience. Yes, AlphaGo beats the best human Go players, but it trained on many human lifetimes' worth of games. ChatGPT holds reasonable conversations, but it draws on thousands of years' worth of written language. No AI system can produce sentences with the creativity of a human five-year-old given the same amount of data.
Our limited brains and communication abilities also drive our ingenuity. We cannot spin up another computer for more processing power, so we must recognise patterns and deploy our attention wisely. Relying on mouth noises is challenging, so we created tools, like language, writing, teaching, and science, to pool knowledge across people and time. Doing so requires us to think about others' minds and work together towards shared goals.
Different Constraints, Different Solutions
Because humans and machines face different constraints, they arrive at different solutions. Modern AI systems can do many of the things humans do, but often in different ways, shaped by their experiences and hardware. For example, consider the sequence aaaaaaaaaaaaaaaaaaaaaaaaaaaaa. How many letters is that? A human can simply count them. For AI systems, it is trickier because of how they represent language: they break text into multi-character tokens, which makes spelling questions hard, and they favour frequent sequences of tokens. OpenAI's GPT-4 model was more likely to answer 30 than the correct 29, simply because 30 appears more often in training data.
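The tokenization issue can be sketched with a toy example. This is not GPT-4's actual tokenizer (real systems use learned byte-pair encodings); it is a minimal stand-in, assuming a greedy chunker that merges runs of letters into tokens of up to four characters, to show why a model operating on tokens never "sees" 29 individual letters.

```python
def toy_tokenize(text, max_token_len=4):
    """Greedily chunk text into tokens of up to max_token_len characters.

    A stand-in for BPE-style tokenization: frequent character runs get
    merged into single multi-character tokens.
    """
    tokens = []
    i = 0
    while i < len(text):
        tokens.append(text[i:i + max_token_len])
        i += max_token_len
    return tokens


sequence = "a" * 29

# A human-style count operates on individual characters:
print(len(sequence))        # 29

# The model operates on tokens instead:
tokens = toy_tokenize(sequence)
print(tokens)               # ['aaaa', 'aaaa', 'aaaa', 'aaaa', 'aaaa', 'aaaa', 'aaaa', 'a']
print(len(tokens))          # 8
```

From the model's point of view the input is eight opaque symbols, so "how many letters?" becomes a question about the contents of tokens it never inspects letter by letter.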
Another example: imagine an AI system assisting a pharmacist who needs a drug concentration of 785 parts per million (ppm) and has two test tubes to hand, one at 685 ppm and one at 791 ppm. A human would pick the 791 ppm tube, but AI systems sometimes pick 685 ppm, because neural networks blur the distinction between a number as a string of digits and a number as a quantity. As a string, 785 is more similar to 685; as a quantity, it is closer to 791. Mixing the two up can have significant consequences.
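The two notions of "closeness" can be made concrete. As one simple proxy for string similarity, the sketch below uses Levenshtein edit distance (the number of single-character changes needed to turn one string into another); real neural networks do not compute this explicitly, but it captures the digit-level resemblance being described.

```python
def edit_distance(s, t):
    """Levenshtein distance between strings s and t, via dynamic programming."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]


target = 785
for candidate in (685, 791):
    print(candidate,
          "string distance:", edit_distance(str(target), str(candidate)),
          "numeric distance:", abs(target - candidate))
```

As strings, 685 is the closer match to 785 (one digit differs, versus two for 791); as quantities, 791 is far closer (6 ppm off, versus 100 ppm). A system that blurs the two representations can pick the wrong tube.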
Breadth of Human Experience
Human intelligence draws on a breadth of experience that goes beyond AI training data. We use our brains to put nappies on babies, play chess, prove theorems, cook dinner, write novels, and compose symphonies. AI systems are typically trained for a single task. ChatGPT can offer tips about nappies, but it cannot gently hold a squirming infant. Human brains evolved in a world that presented all of these challenges, equipping us to learn whatever we might need to do in a single lifetime.
Our finite lives, brains, and communication capacity shape human intelligence. Thus, human minds will continue to be special even as we develop smarter machines. Intelligence is not a single scale with AI catching up to a mark on the doorframe. This perspective makes us sceptical of superhuman AI claims. Differences in constraints, training, and hardware lead to a different conclusion: AI will not be better than humans at everything. It will be better in some ways and worse in others. AI and human minds will simply be different. Like siblings, we can learn to treat each other not as rivals, but as companions.
Tom Griffiths is professor of information technology at Princeton University and author of The Laws of Thought (William Collins).