Evidence of the presence of artificial intelligence (AI) is all around us.
To illustrate, let’s go shopping. Barcodes scanned at checkout may mark the end of our weekly visit, but they are raw material for AI.
Inventory control programs using big data on weather trends and demand combine with our swipe to put the right replacement products on shelves. At the same time, increasingly sophisticated cameras track purchases and help identify potential thieves. Behind the scenes, AI programs model corporate headquarters strategy, market trends, financial planning and much more.
We are a long way from the friendly local store, with its personal response to each visit and each transaction. There, ordering, stocking, reducing theft and increasing revenue all depend on the merchant. It is no wonder that the march of AI is considered dehumanizing, even apocalyptic.
But this seemingly straightforward replacement of the cheerful shopkeeper with a dehumanized AI reflects our fears about what is being replaced more than it reflects reality. AI needs humans to set the goals, write the programs, curate and check the quality of the training data, and interpret the results.
“Garbage in” is still “garbage out,” and in a rapidly changing world, human judgment is necessary to keep garbage out and make sense of what the machine is telling us. This includes the responses we get from ChatGPT. Without human judgment, Big Data is just Big Numbers.
Machines:
- Have no consciousness or intentionality;
- Cannot think abstractly or form opinions;
- Are not good at identifying relevance through context (a comment appropriate in one situation or culture may be deeply offensive in another) and do not “make” sense (think of metaphor, irony, or humor);
- Have no beliefs or conscience grounded in ethics and spirituality, nor self-confidence born of aspiration or ambition;
- Have no emotion or empathy and cannot form relationships or other social bonds involving feelings;
- Cannot anticipate spontaneity, idiosyncrasy, contextual change, or fallibility; and
- Cannot remedy incompleteness, including confusion between correlation and causation.
It’s quite a list. Beyond storekeeping, every major judgment we must make in our professional and personal lives involves some combination of these, whether we are dealing with colleagues, competitors, climate change or children.
Judgment can be defined as the combination of relevant knowledge and experience with personal qualities, applied to making decisions and forming opinions.
We exercise it through our awareness, by knowing whom and what to trust, by understanding our feelings and beliefs, through the way we make our choices and, in the case of decisions, by being able to deliver on what we choose.
So whatever a machine does with AI, it does not exercise judgment: machines are not mechanical human beings. Even the controversial possibility of “artificial general intelligence,” where what the machine can do is equivalent to what the human can do, does not fill these gaps.
These reasons do not mean that humans are better than machines in every situation. On the contrary, humans and machines each have relative strengths and weaknesses, and the machine’s relative superiority in certain cases stems from human weakness. AI offers speed and consistency, neutrality and focus, without getting bored, falling ill, acting capriciously, being carried away by greed or fear, or having distracting love affairs with other algorithms.
In any case, the hypothesis of universal substitution is simplistic.
Dr. Eric Topol, in his book Deep Medicine, describes the superiority of AI over humans in certain medical specialties such as radiology, where human fallibility is a problem. AI is getting even better at some aspects of nursing through remote monitoring of patients at home.
But AI cannot provide “the power of detailed and careful observation,” especially with complex psychological support, not only in nursing but in all aspects of medicine. Topol believes the ideal is for humans and machines to work together, with machines allowing humans to do what they do best: talk to patients.
AI is not a zero-sum game in which machines win and humans lose. As Deep Medicine illustrates, it will be the combination of machine and human that will provide quality medical care.
Those who fail to recognize what AI can and cannot offer will be overtaken by those who do. But far from diminishing the role of judgment, AI will make even clearer the central contribution of humans in choices that combine permutations of unprecedented situations, significant complexity, new variables, abstract thinking, unusual trade-offs, insufficient data, convoluted qualitative factors, multidimensional risk, idiosyncratic relationships, and nuances of personality.
In other words, this is the essence of what senior executives do and why they are paid handsomely to do it.
Sir Andrew Likierman is Professor of Management Practice at the London Business School and former Dean. He has published articles on judgment in leadership, professions and boards of directors.