One in five Americans will make 2025 the year they save more money, and a growing number are turning to generative AI for advice on how to achieve their personal finance goals. But is using ChatGPT or Gemini to improve your personal balance sheet the right decision? Experts say it’s complicated.
When used as glorified search engines, large language models excel at helping you find information quickly, and they can also be a good source of general advice on how to budget or improve your credit score. However, getting precise answers to specific and sensitive financial questions is where the concerns begin, says Andrew Lo, director of the MIT Laboratory for Financial Engineering at the MIT Sloan School of Management.
“It is very dangerous to seek advice (from AI) of any kind, whether legal, financial or medical,” he said. “These three areas pose serious dangers if not handled well.”
Many AI platforms lack domain-specific expertise, reliability, and regulatory knowledge, especially when it comes to providing sensitive financial advice. They could even lead people to make unwise investments or financial decisions, Lo warns.
Despite these concerns, many Americans are already turning to AI chatbots to help manage their finances. Among the 47% who said they have used generative AI for personal finance or considered doing so, 96% reported a positive experience, according to an October 2024 study from Experian.
Platforms like Perplexity and ChatGPT can help people looking for advice on saving and budgeting, investment planning, and improving their credit score. According to Christina Roman, consumer education and advocacy manager at Experian, this technology provides a great starting point for consumers who might not otherwise be able to afford professional financial advice.
“I don’t think it’s going to make people dependent on AI for these types of services, but I think it’s a great tool that can help them start to navigate their financial lives and really understand complex topics like investing and so on,” says Roman.
While every prompting experience is different, an individual can provide relatively simple details about their financial situation, and generative AI can produce a fairly elaborate plan.
Here is an example of a well-designed AI prompt for personal finance advice:
“I need help managing my money. I make $50,000 a year.
I have $10,000 in debt on one credit card and $2,500 in debt on another credit card.
My monthly rent is $750.
My car payment each month is $450.
I have $150 in other utility expenses.
I only have $250 saved in my emergency fund.
Can you help me get on the right track?”
Here are the six distinct sections ChatGPT provided in response to this prompt:
“1. Budgeting with the 50/30/20 rule (customized for you), including an estimate of monthly income
2. Expense breakdown
3. Debt repayment strategy: snowball or avalanche
4. Emergency fund goal
5. Budget adjustments
6. Automating payments and savings, plus a sample action plan for next month”
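To make the arithmetic behind such a plan concrete, here is a minimal Python sketch of steps 1 through 3, using the figures from the sample prompt. The card APRs are assumptions for illustration only (the prompt gives balances but no interest rates), income is treated as gross rather than take-home, and nothing here is financial advice.

```python
# A toy model of the plan's arithmetic, using the figures from the
# sample prompt. Card APRs (24% and 18%) are ASSUMED for illustration;
# the prompt gives balances but no interest rates. Income is treated
# as gross for simplicity; a real budget would use take-home pay.

monthly_income = 50_000 / 12

# Step 1: the 50/30/20 rule -- needs / wants / savings-and-debt.
needs = monthly_income * 0.50
wants = monthly_income * 0.30
savings_and_debt = monthly_income * 0.20
print(f"Needs ${needs:,.0f} | Wants ${wants:,.0f} | "
      f"Savings/debt ${savings_and_debt:,.0f} per month")

# Step 2: fixed expenses from the prompt, checked against the needs bucket.
fixed = {"rent": 750, "car payment": 450, "utilities": 150}
print(f"Fixed needs total ${sum(fixed.values()):,} "
      f"of the ${needs:,.0f} needs bucket")

# Step 3: snowball (smallest balance first) vs. avalanche (highest APR
# first), putting the whole 20% bucket toward the two cards.
cards = [{"balance": 10_000, "apr": 0.24}, {"balance": 2_500, "apr": 0.18}]

def payoff(order, budget):
    """Simulate monthly payments in the given card order.

    Returns (months to debt-free, total interest paid)."""
    order = [dict(c) for c in order]          # don't mutate the input
    months, interest = 0, 0.0
    while any(c["balance"] > 0 for c in order):
        months += 1
        for c in order:                       # accrue one month of interest
            charge = c["balance"] * c["apr"] / 12
            c["balance"] += charge
            interest += charge
        remaining = budget
        for c in order:                       # pay cards in priority order
            payment = min(remaining, c["balance"])
            c["balance"] -= payment
            remaining -= payment
    return months, interest

snowball = sorted(cards, key=lambda c: c["balance"])
avalanche = sorted(cards, key=lambda c: c["apr"], reverse=True)
for label, order in (("Snowball", snowball), ("Avalanche", avalanche)):
    m, i = payoff(order, savings_and_debt)
    print(f"{label}: debt-free in {m} months, ${i:,.0f} total interest")
```

With these assumed APRs, the avalanche order ends up cheaper in total interest, while the snowball clears the smaller card sooner, which is exactly the tradeoff step 3 of the plan asks the user to weigh.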
Asking the AI follow-up questions and adding more details about your financial situation and goals is good practice. It helps the platform understand your unique situation and offer more useful insights.
However, Roman advises people to treat the output with care. AI platforms can hallucinate, meaning the advice they offer may not reflect best practices, or even reality, when it comes to personal finance.
Additionally, Roman says to keep the information you provide to any generative AI platform general, since you may not realize how your information will be stored or used to train the AI model itself.
Generative AI platforms are improving by the minute, and there is no doubt that the ChatGPT of 2025 is more accurate and detailed than the version available just a year or two ago. But that’s exactly what has some financial experts worried: hallucinations may hide in plain sight, and people without financial expertise or experience may not know the difference.
In Lo’s recent research paper on using generative AI for financial advice, he cites an example in which ChatGPT 3.5 invented the author names of an article it used to support its answers. While this may not seem like a serious offense, when it comes to statements involving financial risk, hallucinations can ruin a person’s finances.
AI platforms don’t always disclose as much sourcing or background information as you might want or need. For example, when asked for investment advice, ChatGPT may recommend investing in companies like Microsoft. Human financial advisors might do the same, but ordinary users may not realize that Microsoft has invested more than $13 billion in OpenAI, the maker of ChatGPT. The chatbot only discloses the potential conflict of interest if asked.
In other exchanges, ChatGPT has acknowledged that its own responses were “suboptimal.”
Working with a human financial advisor allows for more conversation-based financial planning. Details about an individual’s financial situation and goals can be discussed in depth to create a personalized plan that accounts for all the risks of potential money moves. Generative AI will offer financial advice based on only minimal details, and people who act on that information without considering their overall financial situation could make costly mistakes.
As LLMs become more advanced, one can imagine a future in which generative AI is much more integrated into the financial advisory ecosystem.
Michael Donnelly, interim managing director of business growth at the CFP Board, says finance professionals are perhaps better positioned than anyone to make the case that technology cannot replace human advice. The profession went through a similar debate a decade ago, during the rise of robo-advisors.
Financial advisors who learn to embrace AI as a tool for things like internal practice management will excel, says Donnelly. Using AI can free up time that advisors can better spend strengthening personal relationships, a hallmark of the financial planning profession.
“Advisors who don’t care about technology or are reluctant to engage, implement and adopt technology, those are the advisors who will be impacted and potentially replaced by AI,” he says.
For consumers, AI won’t eliminate the value of working with a human financial planner, Donnelly says. But for those who don’t have a dedicated financial advisor, Lo has concerns.
“We don’t yet have guardrails on the ability of large language models to provide advice to consumers,” says Lo. “And I think on the regulatory side we need to have more careful guardrails, but on the research side it really opens up a whole new set of vistas for us to explore.”
Lo compares the situation to how consumers have broad access to low-risk mutual funds and money market accounts, while much stricter regulations govern who can invest in riskier vehicles like private equity or hedge funds. AI, for the most part, has no such guardrails.
Lo suggests a three-pronged approach to making the intersection of AI and finance safer:
Investor education: Help investors understand that hallucinations can occur and that any response should be carefully verified.
Integrated guardrails: Create LLMs that can properly detect abuse and misuse (a simplified illustration follows this list).
Regulation: Just as certain financial products are limited to certain audiences, access to AI-generated financial advice should be, too.
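To give a sense of what “integrated guardrails” could mean in practice, here is a deliberately crude, hypothetical sketch. Everything in it is an assumption: the generate() stub stands in for a real LLM call, and the two regular expressions are placeholders for the trained classifiers and policy models a production system would actually use.

```python
# A hypothetical, deliberately simple output guardrail: screen a model's
# reply for ticker-style investment advice or "guaranteed return" claims
# and attach a verification warning. Real guardrails rely on trained
# classifiers and policy models, not two regexes.
import re

TICKER_ADVICE = re.compile(r"\b(?:buy|sell|short|invest in)\b[^.]*\b[A-Z]{2,5}\b")
GUARANTEE = re.compile(r"\bguaranteed\s+(?:returns?|profits?)\b", re.IGNORECASE)

def generate(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns a canned reply for the demo.
    return "You should invest in MSFT for guaranteed returns."

def guarded_generate(prompt: str) -> str:
    reply = generate(prompt)
    if TICKER_ADVICE.search(reply) or GUARANTEE.search(reply):
        reply += ("\n[Guardrail] This reads like specific investment advice. "
                  "Models can hallucinate and have no fiduciary duty; "
                  "verify with a licensed advisor before acting.")
    return reply

print(guarded_generate("How should I invest my savings?"))
```

Even a wrapper this naive illustrates the design Lo gestures at: the detection logic sits between the model and the consumer, so the warning appears whether or not the user thinks to ask about reliability or conflicts of interest.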
“There are winners and losers whenever you introduce new technologies, and the early adopters are the ones who are likely to make a lot of mistakes using the technology, but they are also the ones who are much more likely to innovate with the technology rather than resist it,” says Lo.