Generative AI Implementation in Financial Services: Current Trends and Challenges
Despite the enthusiastic rhetoric from AI providers and industry leaders, financial services companies are still in the early stages of adopting generative and agentic artificial intelligence (AI). A recent roundtable discussion held by the Securities and Exchange Commission highlighted these ongoing challenges and the perceived slow pace of deployment compared to the tech sector.
Unlocking Efficiency with Generative AI
Generative AI holds the promise of significantly enhancing back-office efficiencies within financial institutions. Functions such as operations, compliance, and human resources, as well as client-facing services like wealth management, can benefit from AI-driven solutions. Sarah Hammer, Executive Director at the Wharton School, emphasizes that while the potential advantages are acknowledged, many companies are still in the exploration phase.
Slow Adoption Rates
As noted by Hardeep Walia, Managing Director and AI Chief at Charles Schwab, financial services firms are adopting AI at a much slower rate than the companies that develop the technology. “We are all in these first rounds,” Walia stated, pointing out that firms are currently focused on evaluating the ROI of potential AI implementations.
Continued Experimentation and Human Oversight
During the discussions, Walia further clarified that while many practical use cases for AI exist, most still require a human in the loop. Even where generative AI can streamline certain processes, human expertise remains crucial for ensuring quality and precision.
Cost Considerations and Open Source Models
Hammer acknowledged the high costs associated with implementing generative AI, particularly in processes where it yields little productivity gain. However, other panelists noted that the emergence of open-source AI models, such as DeepSeek, has helped to mitigate these expenses, making it easier for organizations to explore AI capabilities without excessive financial burdens.
Addressing Last-Mile Challenges
According to Peter Slattery, an MIT researcher with FutureTech, generative AI faces significant “last-mile” challenges. While large language models can reach roughly 90% of a human’s performance and accuracy, a significant technological leap is still required to fully automate tasks that currently depend on human intervention. This raises questions about the feasibility of fully automating processes with AI in the near future.
Navigating New Risks in AI Adoption
As companies integrate AI into their operations, they face new risks that traditional risk management frameworks may not adequately address. Slattery pointed out that the evolving landscape of AI necessitates a rethink of accountability and responsibility, especially when AI agents interact with one another. His research has introduced new risk categories related to multi-agent responsibilities, emphasizing the urgency for comprehensive risk management strategies.
The Importance of Governance and Collaboration
With the rapid advancement of AI technologies, robust governance policies are essential. Hammer stressed the need for organizations to adhere to responsible AI practices amid growing regulatory scrutiny, including initiatives like the EU's AI Act. Tyler Derr, CTO of Broadridge, echoed the sentiment that maintaining dynamic risk policies is crucial to adapting to emerging use cases, and advocated for collaborative efforts across the industry to enhance cybersecurity and risk mitigation strategies.