Generative AI, including platforms like ChatGPT, is transforming industries by making processes simpler, more efficient and easier to interact with. However, in the heavily regulated financial services industry, the benefits also come with serious risks. It is therefore essential that this emerging technology is used responsibly in order to maintain stability and confidence.
Financial services are no strangers to technological advancements, but generative AI presents an unexplored and complex landscape for the industry. Information about its potential often comes from consultation reports or academic opinion articles, which tend to be speculative and lack real data.
This gap in understanding inspired my recent research. Through interviews with bank executives and industry experts, I explored the challenges and opportunities associated with integrating generative AI into financial services. My research also focuses on how this transformative technology is reshaping the consumer experience.
Generative AI goes far beyond using ChatGPT to produce text or DALL-E 3 to create images. It can be used to analyze a consumer’s financial history and behavior to tailor products such as loans, investment plans or insurance policies. And generative AI can also be used to make quick decisions regarding loan applications.
A consumer may be accustomed to using their bank's chatbot to obtain information about a product. Erica, Bank of America's virtual financial assistant, has facilitated over two billion interactions with 42 million customers. (On average, Erica processes two million requests every day.)
But we don’t yet know who is responsible for the advice and products offered by generative AI. Does the responsibility lie with bank directors, executives, or the AI itself?
For example, if a consumer relies on generative AI for financial advice, there is no guarantee that this advice is credible and appropriate. Critics cite bias and lack of nuanced understanding and judgment.
However, generative AI is considered a valuable "second pair of eyes" for wealth managers, with the potential to evolve into a reliable tool for individual investors too.
AI can make wealth management more accessible and efficient. Robo-investing platforms use AI to create personalized investment strategies, managing portfolios based on goals and risk tolerance. This approach reduces costs and provides 24/7 portfolio monitoring without requiring direct human oversight.
But given the financial industry’s high stakes and stringent demands for accuracy, AI tools must be both reliable and precise. Yet the question remains: can this level of trust and assurance ever be fully guaranteed?
Personalization is fast becoming the cornerstone of financial services. My earliest research looked at how AI tools are used to create tailored marketing emails and advertising campaigns.
As banks are now able to access diverse customer data sets and harness the creative power of AI, the potential for personalized advertisements and personalized financial products is immense. But at the same time, the balance between relevance and privacy is becoming increasingly delicate.
And it's not just banks and institutions using AI legitimately. Generative AI can produce misleading or even fictitious advertisements, potentially ushering in an era of deepfakes that deceive consumers or leave them doubting reality.
As this landscape evolves, consumers must remain vigilant and critically evaluate marketing messages. AI may make scams appear more sophisticated, but the usual protection methods, like checking that messages come from official websites, emails, or verified accounts, still apply.
Be wary of high-pressure "act now" tactics, poor grammar, or altered URLs (for example, "paypa1.com" – with the number 1 – instead of "paypal.com"). Just because an online ad has your name on it doesn't mean it's for you. It could have been generated by AI to convince you to click – with potentially disastrous consequences.
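The lookalike-URL check described above can even be automated. As a minimal sketch (the trusted-domain list here is hypothetical, and real phishing filters are far more sophisticated), a link's hostname can be compared against domains you actually trust, so that "paypa1.com" fails where "paypal.com" passes:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains the user actually trusts.
TRUSTED_DOMAINS = {"paypal.com", "bankofamerica.com"}

def is_trusted(url: str) -> bool:
    """Return True only if the URL's hostname exactly matches a trusted domain."""
    host = urlparse(url).hostname or ""
    # Strip a leading "www." so "www.paypal.com" still matches "paypal.com".
    if host.startswith("www."):
        host = host[4:]
    return host in TRUSTED_DOMAINS

print(is_trusted("https://www.paypal.com/signin"))  # genuine domain -> True
print(is_trusted("https://paypa1.com/signin"))      # lookalike with digit 1 -> False
```

An exact-match check like this is deliberately strict: a domain that differs by a single character is treated as untrusted, which is precisely the failure mode of typosquatted scam links.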
Generative AI is here to stay in every sphere of our lives. This is a new landscape for consumers, so it is essential that they monitor how they interact with ads, tools and technology. Even though financial services are well regulated, consumers need to ensure they only use genuine tools provided by their bank.
And while ChatGPT may offer advice, its developer OpenAI will take no responsibility for any recommendations it makes. If you want to use AI, it is far better to use the chatbot provided by your bank. This way you can be sure that you are getting information from a credible source.
The regulated space for financial service providers, including their use of chatbots, places a responsibility on banks to meet their legal and compliance obligations. This ensures that they protect consumers, provide accurate and reliable information, and comply with industry standards and regulations. The same cannot always be said for generative AI more generally.
Regulators also have a role to play in reassuring and educating consumers about emerging trends in generative AI. The Financial Conduct Authority and the Advertising Standards Authority must ensure that flexible frameworks are in place that can keep pace with rapid advances in AI technology.
This will involve creating clear guidelines for the development, use and monitoring of generative AI systems, balancing innovation and consumer protection.
The generative AI genie will not go back in the bottle. It will continue to be an integral part of daily life. Consumers therefore need to be proactive and engage with this new, rapidly evolving technology.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Emmanuel Mogaji does not work for, consult, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.