
AUTHOR SPEAK

Human decision making in the time of growing automation

Renowned psychologist Gerd Gigerenzer shares insights from his new book into the continuing importance of human decision-making in a time of growing automation.

Over a long and distinguished career in psychology, Gerd Gigerenzer has investigated how people make decisions when there are limits on information and time. Here, he explains why he continues to believe in the power of human intelligence and what we can learn from our minds about how to best use AI.

Why do you believe the human mind can still outperform artificial intelligence in certain situations?

It’s very important to distinguish between stable situations, where tomorrow is likely to be the same as today and the day before, and unstable ones, which are ill-defined and contain uncertainty. The difference between these two situations is essential for understanding what AI can do for us. It finds success in stable situations, including in games like chess and Go, and industry applications where AI can carry out routines without much human thinking required.

Situations where uncertainty is involved are a very different proposition. There’s no evidence that deep learning and complex models can do any better in such situations than the simple rules that human intelligence relies on, otherwise known as heuristics. In my studies, I’ve found that the answers delivered by simple heuristics can be better than those provided by highly complex models.

What are the advantages of a decision-making process based on heuristics rather than a purely data-driven, ‘optimizing’ approach?

Firstly, in an uncertain situation, the concept of an optimizing solution is a total illusion. If you try to optimize, you’re just hoping that the future is exactly like the past, which is unlikely. Highly complex models are sensitive to small systematic changes and can easily fall apart. Under uncertainty, you need a robust approach based on smart heuristics rather than optimizing.

An example is the case of Harry Markowitz. He was awarded the Nobel Memorial Prize in Economic Sciences for his work on mean-variance optimization, which addresses the question of how to allocate a given sum of money across assets. However, when he invested his own money for his retirement, he did not use his Nobel Prize-winning optimization model. Instead, he used a simple heuristic known as 1/N, which allocates funds equally across all assets under consideration. In contrast, his highly parameterized model needs to estimate all future means, variances, and covariances from past data. The more you have to estimate, the more estimation error you can expect.

1/N, however, is a heuristic. It estimates nothing because it uses no past data at all. While it may have a bias, you get rid of estimation error, and studies have shown that 1/N can make more money than optimization in the real world. Less can be more.
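To make the contrast concrete, here is a minimal sketch, not taken from the interview or the book, of the 1/N rule next to the estimation burden a mean-variance optimizer faces. The return data and figures are invented purely for illustration.

```python
# Minimal sketch (illustrative data, not from the interview): the 1/N rule
# versus the estimation burden of mean-variance optimization.
import numpy as np

rng = np.random.default_rng(0)
n_assets = 5
# Hypothetical past monthly returns for the assets under consideration.
past_returns = rng.normal(loc=0.01, scale=0.05, size=(60, n_assets))

# 1/N heuristic: ignore the historical data and allocate funds equally.
weights_1n = np.full(n_assets, 1.0 / n_assets)

# A mean-variance optimizer, in contrast, must first estimate every future
# mean, variance, and covariance from noisy past data -- each estimate
# carries error that the heuristic simply avoids.
estimated_means = past_returns.mean(axis=0)
estimated_cov = np.cov(past_returns, rowvar=False)
n_parameters = n_assets + n_assets * (n_assets + 1) // 2

print("1/N weights:", weights_1n)
print("Parameters the optimizer must estimate:", n_parameters)
```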

Could you share another example of a situation where heuristics worked better than a purely data-driven AI?

One such instance is Google’s attempt to predict the spread of the flu between 2007 and 2015. The assumption was that people with symptoms would enter search terms related to their illness. So, it would be possible to find out where the flu is spreading based on the frequency of those searches. Google’s engineers analyzed some 50 million search terms, tested 450 million different algorithms, and developed a secret algorithm that used 45 terms (also kept secret).

However, this approach failed. When there was an outbreak of swine flu in the summer of 2009, the algorithm could not recognize it because historical data had taught it that flu was high in the winter and low in the summer. In response, the engineers continued to make the algorithm more complex, which did not improve results.

In contrast, what does the human brain do if it has to predict something highly volatile? It doesn’t use big data. Instead, it uses only the most recent pieces of information, which are the most reliable ones. We therefore used a heuristic that took only the most recent data point available for flu-related doctor visits, nothing else, and used it to predict next week’s visits. This simple heuristic predicted the flu much better than Google’s algorithm over the eight years Google made predictions. One data point can be better than big data.

This result shows that it can be useful to look at how human intelligence deals with volatile situations, and that ignoring some information is not always bad.
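As a rough sketch of the recency heuristic described above (the data and function name are hypothetical, not from the study), the forecast is simply the last observed value:

```python
# Sketch of the recency heuristic: predict next week's flu-related doctor
# visits using only the most recent observation. The counts are invented.

def recency_forecast(weekly_visits):
    """Predict the next value as the last observed value."""
    return weekly_visits[-1]

observed_visits = [120, 135, 160, 210, 190]  # hypothetical weekly counts
print(recency_forecast(observed_visits))  # -> 190
```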

If a heuristic approach is more successful in many real-world situations, how can we apply those lessons to use AI more effectively?

There are two ways, the first of which is very simple. When you are offered an AI application, evaluate whether it would be deployed in a stable or an uncertain situation. If the situation is uncertain, keep away from current AI applications and solve the problem yourself.

Then there’s also the question of how we can use psychology to make machines smart. Psychological AI is the original vision of AI proposed by Herbert Simon and Allen Newell. Today, many machine learning researchers do not even consider how the brain solves problems. Yet deep learning is not the route to true intelligence because more computing power makes algorithms faster but not smarter – it does not generate intuition and common sense. For instance, children need to see only one or a few U.S. school buses to recognize all others, while deep artificial networks need thousands of pictures and can still be fooled into believing that a picture consisting only of horizontal yellow stripes also represents a school bus. Deep learning is fundamentally different from human intelligence.

The future lies in using insights from the human brain to integrate present-day machine learning with psychological AI. We need to get causal thinking, intuitive psychology, and intuitive physics into AI. That’s the way forward.
What do you hope readers of your new book will take from it?

AI has been sold as a super-intelligent assistant that tells us what we should do, encouraging us to lean back and dutifully follow its recommendations. That’s the wrong idea. To realize the possibilities of smart technology, we need to get smarter and understand what it can and can’t do.

For instance, Elon Musk tells us every year that we will have self-driving cars (Level 5) the following year. Level 5 means a car that is able to drive safely under the full range of driving conditions without any human backup. Despite the ongoing marketing hype, no such car exists. Given the unpredictability of human drivers and the difficulty for AI in dealing with uncertainty, I predict that we will not have self-driving cars of this kind. 

We will likely get something much more interesting: Level 4 automation, with cars that can drive without human intervention in restricted areas. That technology already exists, and it’s a vision we can apply more widely. Level 4 is interesting because it will change our environment. It requires a stable environment and humans who behave more predictably if we are to profit from the limited abilities of AI. And that may eventually mean that we humans will no longer be allowed to drive.

AI is not just a technology that assists us in making our lives more convenient. It changes us, as many technologies have done before. We must adapt to take advantage of it and, at the same time, stay in charge. That’s what I’d like readers to take from the book.

How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms

Read more from Gerd Gigerenzer's new book 'How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms', available at leading booksellers.

Author

Gerd Gigerenzer

Professor & Psychologist

Gerd Gigerenzer is Director of the Harding Center for Risk Literacy at the University of Potsdam and Director Emeritus at the Max Planck Institute for Human Development in Berlin. He is a former Professor of Psychology at the University of Chicago and was John M. Olin Distinguished Visiting Professor at the School of Law of the University of Virginia. He is a member of the Berlin-Brandenburg Academy of Sciences and the German Academy of Sciences, and an honorary member of the American Academy of Arts and Sciences and the American Philosophical Society.