Experts at Future Proof share how RIAs can safely use AI



Though AI adoption is on the rise in wealth management, standards and guidance aren’t keeping up. Many firms still lack internal guardrails for AI use, and for registered investment advisory firms, that presents serious regulatory risks. 

AI can “be a little terrifying when you think about some of the fails that can happen, that can affect an RIA or any firm,” said Craig Iskowitz, wealthtech whisperer and CEO of consulting firm Ezra Group. 

Moderating a session on “How to Safely Bring AI Into Your RIA Business” at the Future Proof Festival in Huntington Beach, California, Iskowitz listed potential AI missteps: posting a client’s 1040 to a public chatbot (“now it’s stored on someone else’s server”); letting a hallucinated RMD figure slip through in an email (“now it’s an unarchived communication”); having a vendor’s AI plug-in hacked (“then your document vault gets drained”).

In the Tuesday panel, Caitlin Douglas, chief operating officer of RIA investor Elevation Point, and Tom Fields, founder and CEO of advisor platform Fynancial, discussed how RIAs can avoid those and other mistakes to deploy AI “without giving your CCO a heart attack,” as Iskowitz put it.

Key strategies for an RIA that wants to safely implement AI include:

  • Don’t outright ban AI use. 
  • Define what problems need to be solved before looking for AI vendors.
  • Ask AI vendors the right questions for thorough due diligence.
  • Develop an AI compliance manual and regular training protocol. 


Why it’s dangerous to ban AI

Some firms may think the safest way to handle AI is to simply outlaw advisors from using it. But that’s “just not realistic,” Fields said. 

Banning AI outright could keep a firm safe from regulatory risk in the short term, but in the long term it increases another type of risk, he said: the risk that the business gets left behind when “the RIA down the street is actually implementing safe AI and growing quicker, making their advisors more efficient and giving their clients ultimately better experience.”

And banning AI can actually drive risk via shadow use — “a really common use case that I’m fearful about,” he said. Shadow use is when an advisor uses an unmonitored personal device to consult AI (such as ChatGPT) to arrive at an output, then pastes that back into a work-monitored device before sending it out. 

“There’s so many things wrong with that workflow, but that is currently what some advisors unfortunately are doing,” Fields said. 

Instead, chief compliance officers who recognize advisors’ interest in AI need to figure out a way to safely bring it into the firm. And a large part of that involves thorough due diligence. 

AI due diligence and asking vendors the right questions

Before a firm even reaches out to vendors, it should clearly understand what problem it needs to solve with AI, be it workflows, reporting, risk assessment or something else. Only then should an RIA identify vendors that might solve the particular issue, Douglas said. 

A key part of due diligence is understanding how those vendors access and store data, she added. 

“With AI specifically, I think it’s really important that the teams ask questions around, what are the processes that are in place to not only validate but also test the accuracy of the data,” she said. Vendors should also “be able to demonstrate with full transparency” around documentation and data validation, she said. 

RIAs should also ask about any potential vendor’s cybersecurity, Douglas said: “What are their cybersecurity policies? What type of insurance policies do they even have in place?” 


When it comes to generative AI tools, RIAs must understand a model’s “temperature” setting, which controls how much randomness the model adds to its responses — too high a setting could lead to hallucinations. 

“The lower the temperature, the less creativity that AI is going to provide,” Fields said. If the end goal is producing marketing content, a high temperature setting could be useful, but for performance analytics, for example, the temperature should be low. 

“It’s a question that firms need to ask their vendors,” Iskowitz said. “Every vendor is putting AI into their tools somewhere. You need to ask them, ‘What is the temperature of the models you’re using?’” 
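
For illustration, here is a minimal sketch of how a temperature value is typically passed to a model. It assumes the OpenAI Python client and hypothetical prompts; it is not tied to any particular vendor's tool, and the right values for a given workflow are the kind of detail to confirm with the vendor.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key is already configured in the environment

    # Low temperature: keep output predictable and fact-focused (e.g., performance summaries)
    analytics = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.1,  # little randomness; answers vary little between runs
        messages=[{"role": "user",
                   "content": "Summarize this quarter's portfolio performance in plain language."}],
    )

    # Higher temperature: allow more creative wording (e.g., marketing copy drafts)
    marketing = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.9,  # more randomness; phrasing changes noticeably between runs
        messages=[{"role": "user",
                   "content": "Draft three taglines for a retirement-planning webinar."}],
    )

    print(analytics.choices[0].message.content)
    print(marketing.choices[0].message.content)

The same question applies whether the model sits behind a chat window or is embedded as a feature inside another tool the firm already uses.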

Fields said firms should also ask vendors if they can swap out the large language model (LLM) they’re using so that an RIA doesn’t get “pegged into one specific model” as new and improved ones come out. 

Beware of vendor pitches that are all flash, he added. Vendors should explain at the outset how they’re implementing GRCC — governance, risk, compliance and cybersecurity — with AI. If they don’t, “That’s a red flag for me,” Fields said.

AI policies and training to reduce regulatory risk

There’s a lot on the line, financially, for firms to ensure compliance. In the past five years, Iskowitz said, the SEC has imposed more than $1.5 billion in penalties on broker-dealers and advisors “for failing to preserve business-related communications — and AI output falls under that category.”

Most advisors know that client communications must be archived. But they may not realize that prompts they input to generative AI tools also need to be archived. And if advisors are using unmonitored devices or apps to arrive at outputs, there’s no log to show regulators — a risk for the firm. Partnering with a tech platform that logs and captures those steps is one way to reduce risk, Fields said. 

“If there is an audit, or if there’s some sort of regulatory pressure that comes down, these firms are going to be covered,” he said. 
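
As a rough illustration of what that kind of capture can look like, the sketch below appends each prompt and model response to a local, timestamped log. The file path, field names and the send_to_model placeholder are hypothetical, not any specific platform's API; a real archiving vendor would handle retention, tamper-resistance and supervision on top of this.

    import json
    from datetime import datetime, timezone

    ARCHIVE_PATH = "ai_prompt_archive.jsonl"  # hypothetical append-only log location

    def archive_interaction(advisor_id: str, prompt: str, response: str) -> None:
        """Record one prompt/response pair so it can be produced during an exam or audit."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "advisor_id": advisor_id,
            "prompt": prompt,
            "response": response,
        }
        with open(ARCHIVE_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # Usage sketch: log before anything reaches a client
    # response = send_to_model(prompt)                  # placeholder for the firm's AI tool
    # archive_interaction("advisor-123", prompt, response)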

Updating compliance manuals and firm-wide training policies could also help keep RIAs safe from regulatory risk by letting advisors know what’s acceptable and what’s not. 

“It doesn’t have to be a giant booklet,” Iskowitz said about crafting a compliance manual. “It can just be a one-pager on ‘Here’s the safe AI usage policies for our firm.’” That guidance should be updated regularly, too. 

Firms should also make sure advisor training aligns with the internal compliance manual — and that training happens not only when new employees are onboarded but also on a regular cadence or as new strategies are implemented, Douglas said. 

To draft those policies, outside experts can help, especially since AI tools are still so new and experience with them is limited, she said: RIAs “should really rely on CCO and compliance consultants in the industry to help you implement your compliance, your AI policy into your compliance manual to make sure that it’s meeting up to the standards.” 


