When it comes to investing and planning your financial future, are you more willing to trust a person or a computer?
This is no longer a hypothetical question.
Major banks and investment companies are using artificial intelligence to make financial forecasts and advise their clients.
Morgan Stanley uses AI to mitigate potential biases of its financial analysts when it comes to stock market forecasts.
And one of the world’s largest investment banks, Goldman Sachs, recently announced that it is testing the use of AI to help write computer code, although the bank declined to say which division is using it.
Other companies use AI to predict which stocks might rise or fall.
But do people actually trust these AI advisors with their money?
Our new research examines this question. We found that the answer depends on who you are and on what you already know about AI and how it works.
Differences in trust
To examine the issue of trust in using AI for investing, we asked 3,600 people in the United States to imagine they were receiving advice about the stock market.
In these imagined scenarios, some people received advice from human experts. Others received advice from AI. And some received advice from humans working in collaboration with AI.
In general, people were less likely to follow advice if they knew AI was involved in creating it. They seemed to trust human experts more.
But distrust of AI was not universal. Some groups of people were more open to AI advice than others.
For example, women were more likely than men to trust AI advice (by 7.5%). People who knew more about AI were more willing to listen to the advice it provided (by 10.1%). And politics mattered: people who supported the Democratic Party were more open to AI advice than others (by 7.3%).
We also found that people were more likely to trust simpler AI methods.
When we told our research participants that the AI used what is called “ordinary least squares” (a basic mathematical technique in which a straight line is used to estimate the relationship between two variables), they were more likely to trust it than when we said it used “deep learning” (a more complex AI method).
This could be because people tend to trust things they understand. Kind of like how a person might trust a simple calculator more than a complex scientific instrument they’ve never seen before.
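To make the contrast concrete, here is a minimal sketch (not taken from the study) of what ordinary least squares looks like in practice, written in Python with NumPy. The data points are invented purely for illustration:

```python
import numpy as np

# Hypothetical data: a market signal (x) and subsequent returns (y).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Ordinary least squares: find the slope and intercept of the straight
# line that minimizes the sum of squared vertical distances to the points.
slope, intercept = np.polyfit(x, y, deg=1)

# The fitted line can then be used to forecast y for a new value of x.
forecast = slope * 6.0 + intercept
print(f"y ≈ {slope:.2f} * x + {intercept:.2f}; forecast at x = 6: {forecast:.2f}")
```

Unlike a deep-learning model, every step of this fit can be written out and checked by hand, which is exactly the kind of transparency our participants seemed to reward.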
Confidence in the future of finance
As AI becomes more common in the financial world, businesses will need to find ways to improve trust levels.
This could involve teaching people more about how AI systems work, making it clear when and how AI is used, and finding the right balance between human experts and AI.
Additionally, we need to adapt how AI advice is presented to different groups of people and show how well AI performs over time compared to human experts.
The future of finance could involve a lot more AI, but only if people learn to trust it. It’s a bit like learning to trust self-driving cars. The technology may be great, but if people don’t feel comfortable using it, it won’t catch on.
Our research shows that building this trust is not just about improving AI. It’s about understanding what people think and feel about AI. It’s about bridging the gap between what AI can do and what people think it can do.
As we move forward, we will need to continue to study how people respond to AI in finance. We will need to find ways to make AI not only a powerful tool, but also a trusted advisor that people feel comfortable relying on to make important financial decisions.
The world of finance is changing rapidly, and AI is playing an important role in this change. But ultimately, it’s always people who decide where to put their money. Understanding how to build trust between humans and AI will be key to shaping the future of finance.
Gertjan Verdickt is a senior lecturer at the University of Auckland Business School, Waipapa Taumata Rau.
This article first appeared on The Conversation.