Human decision making in the time of growing automation
Interview with the renowned psychologist Professor Gerd Gigerenzer
Over a long and distinguished career in psychology, Professor Gerd Gigerenzer has investigated how people make decisions when there are limits on information and time. Here, he explains why he continues to believe in the power of human intelligence and what we can learn from our minds about how to best use AI.
It’s very important to make a distinction between stable situations, where tomorrow is likely to be the same as today and the day before, and unstable ones, which are ill-defined and contain uncertainty. The difference between these two situations is essential for understanding what AI is capable of. It finds success in stable situations, including games like chess and Go, and industry applications where AI can carry out routines without any thinking required.
Situations where uncertainty is involved are a very different proposition. There’s no evidence that deep learning and complex models can do any better in such situations than simple rules which determine your decision, otherwise known as heuristics. In my studies, I’ve found that the answer delivered by a simple heuristic can be better than those provided by highly complex models.
Firstly, in an uncertain situation, the concept of an optimizing solution is a total illusion. If you try to optimize, you’re just hoping that the future is exactly like the past, which is of course unlikely. The highly complex models required are also sensitive to any little changes and can easily fall apart. Under uncertainty, you need a robust approach based on heuristics rather than an optimizing approach.
An example is the case of Harry Markowitz, who was awarded the Nobel Prize in Economic Sciences for his work on mean-variance optimization, which addresses the question of how to allocate a sum of money across assets. However, when he invested his own money for his retirement, he did not use his Nobel Prize-winning optimization model. Instead, he used a simple heuristic known as 1/N, which allocates funds equally across all assets under consideration. That’s because his highly accurate model needs to estimate all the variances from a large set of data, and the more you have to estimate, the harder it is to be accurate.
1/N, however, is a heuristic. It estimates nothing, because it uses no historical data at all. While it may carry a bias, it eliminates estimation error, and several studies show that 1/N makes more money than optimization in the real world.
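The 1/N rule Gigerenzer describes can be written in a few lines. The sketch below is purely illustrative (the function name and the asset list are hypothetical, not from the interview); it simply shows why the heuristic has no estimation error: there is nothing to estimate.

```python
def one_over_n(assets):
    """Allocate a portfolio equally across all assets under consideration.

    The 1/N heuristic estimates nothing from historical data: every
    asset receives the same weight, which removes the estimation error
    that a mean-variance optimizer accumulates when it fits variances
    from limited, noisy data.
    """
    n = len(assets)
    return {asset: 1.0 / n for asset in assets}

# Hypothetical example: four asset classes, each weighted 0.25
weights = one_over_n(["stocks", "bonds", "real estate", "gold"])
```

Unlike mean-variance optimization, the allocation depends only on how many assets you consider, not on any forecast of their future returns.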
One such instance is Google’s attempt to predict flu rates between 2008 and 2015. The engineers reasoned that people with symptoms would enter search terms related to their illness, and that it would be possible to track where flu was spreading from the frequency of those searches. Because that involves sifting through millions of search terms, they assumed the prediction required a complex algorithm.
However, this approach failed. When there was an outbreak of swine flu in the summer of 2009, the algorithm was unable to recognise it because historical data had taught it that flu is high in winter and low in summer. In response, the engineers kept making the algorithm more complex, and they continued to receive inaccurate results.
However, what does the human brain do when it has to predict something highly volatile? It doesn’t use big data. Instead, it relies on the most recent pieces of information, which it knows it can trust. We therefore tested a heuristic that took only the most recent data on flu-related doctor visits and nothing else, and used that as the prediction for the following week. This simple heuristic predicted flu much better, over the entire period, than Google’s algorithm did. It shows how important it is to look at how people deal with such situations, and not to fall into the trap of thinking that ignoring information is always bad.
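The recency heuristic described above is deliberately minimal: take the latest observation, ignore everything older. A sketch, with hypothetical function and variable names and made-up weekly counts:

```python
def recency_forecast(weekly_visits):
    """Predict next week's value from the most recent observation only.

    This is the recency heuristic: all older data points are
    deliberately ignored, so the forecast adapts immediately when
    the situation changes (e.g. an out-of-season outbreak), rather
    than being anchored to historical seasonal patterns.
    """
    return weekly_visits[-1]

# Hypothetical weekly counts of flu-related doctor visits
visits = [120, 135, 160, 180]
prediction = recency_forecast(visits)  # forecast for next week: 180
```

Because the forecast tracks the latest observation, a sudden summer outbreak shows up in the prediction one week later, whereas a model trained on years of seasonal data keeps predicting a quiet summer.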
There are two ways, and the first is very simple. When you are offered an AI application, evaluate whether it will operate in a stable situation or an uncertain one. If the situation is uncertain, keep away from current AI applications and solve such problems yourself with simple rules.
However, there’s also the question of how we can use psychology to create what I call psychological AI. This is the original vision of AI, going back to Herbert Simon. The future does not lie in deep learning, which offers computing power and speed but neither intuition nor common sense. That has become a game in which engineers take a data set and produce ever more complex models, creating an intelligence that is very different from that of a human.
We need to change that. The future lies in figuring out how to take insights from the human mind and build them into a psychological AI. When making predictions, you have to look at how the human mind would do it, and then try to make sure your AI reasons in that way, intuiting causes instead of correlations. That’s the way forward.
The concept of an optimizing solution is a total illusion. If you try to optimize, you’re just hoping that the future is exactly like the past.
What do you hope that readers of your new book will take from it?
So often, AI is sold as a super-intelligent assistant that tells us what we should do, allowing us to just lean back and follow the recommendations we get. That’s the wrong idea. To realise the possibilities of AI, we need to get smarter and understand what it can and can’t do.
For instance, we’re told every year that we will have self-driving cars the next year. To be precise, this means a car with Level 5 automation, meaning that it can operate on its own everywhere and in all circumstances. However, given the unpredictability of human drivers, and the difficulty for AI in predicting within an uncertain situation, I predict that we will not have self-driving cars of this kind.
What’s more interesting is Level 4 automation, with cars that can drive without human intervention in restricted areas. That’s technology that already exists, and a vision that we can apply more widely. We need to change the driving environment to be more predictable if we are to profit from the abilities of AI, and that means we humans will no longer be allowed to drive.
AI is not just a technology that will help us make things easier. It will change us like many technologies have changed us. We must adapt to take advantage of it, and that’s what I’d like readers to take from the book.
Read more from Prof. Gerd Gigerenzer in his new book ‘How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms’, available at Amazon and other booksellers.