The use of AI in finance was a hot topic at this year’s Sibos conference, as banks consider how AI can transform the way financial services are delivered and consumed and, more urgently, the way their data is managed.
Using Large Language Models (LLM) to increase efficiency, improve customer service, and improve decision-making has been part of the discussion since OpenAI launched ChatGPT in November 2022.
While LLMs represent a significant advancement in AI capabilities, particularly in how machines understand and interact with human language, banks are reacting cautiously due to concerns about regulatory compliance, privacy and security. data security, model accuracy and reliability, bias and fairness. .
“Banks have been inundated with data forever, but there is no prioritization and the tagging is incomplete or inconsistent,” says Andy Schmidt, vice president and global head of banking at CGI. “To be able to just train a large language model to find the data, you have to have enough confidence in the data that it’s usable enough.”
“I think the important part that people need to sort through first is really sorting through the data. Define your data governance and ensure the data is of decent quality. Being able to de-duplicate it and then figure out where you need to enrich it,” he says.
Standard Chartered offers AI-powered solutions and Margaret Harwood Jones, global head of finance and securities, says the bank has worked hard to solve data management challenges. “You get so many instruction requests that come in in a very unstructured format, so we use AI to transform them into structured data formats that we can then process efficiently.”
At a Women in Tech Sibos event hosted by EY, panelists discussed how the only way to avoid bias in AI is to train LLMs to represent everyone from their inception, not just white men, and that the only way to do that is to employ more people. staff diversity.
IBM believes that organizations must proactively detect and mitigate risks; monitoring for fairness, bias and drift. Updates to Granite Guardian 3.0, a network monitoring tool widely used in North America by Granite Telecommunications, allow developers to implement security guardrails by checking user prompts and LLM responses. This includes checking for things like social bias, hate, toxicity, profanity, violence, and jailbreaking on 10 of the biggest LLMs.
Due to the potential risks and ethical implications, banks need to take responsible AI seriously, which means taking a rigorous approach to their data.