Though AI adoption is spreading quickly across wealth management, the technology can “be a little terrifying when you think about some of the fails that can happen, that can affect an RIA or any firm,” said Craig Iskowitz, wealthtech whisperer and CEO of consulting firm Ezra Group.
Moderating a session on “How to Safely Bring AI Into Your RIA Business” at the Future Proof conference, Iskowitz posed that challenge to Caitlin Douglas, chief operating officer of RIA investor Elevation Point, and Tom Fields, founder and CEO of advisor platform Fynancial. In the Tuesday panel, the two discussed how RIAs can avoid those and other mistakes and deploy AI “without giving your CCO a heart attack,” as Iskowitz put it.
Key strategies for an RIA that wants to safely implement AI include:
- Don’t outright ban AI use.
- Define what problems need to be solved before looking for AI vendors.
- Ask AI vendors the right questions for thorough due diligence.
- Develop an AI compliance manual and regular training protocol.
Why it’s dangerous to ban AI
Some firms may think the safest way to handle AI is to simply bar advisors from using it. But that’s “just not realistic,” Fields said.
Banning AI outright could keep a firm safe from regulatory risk in the short term, but in the long term, it increases another type of risk, he said: the risk that their business will get left behind when “the RIA down the street is actually implementing safe AI and growing quicker, making their advisors more efficient and giving their clients ultimately better experience.”
And banning AI can actually drive its use underground, with advisors quietly turning to public chatbots on personal, unmonitored devices and pasting the output back into firm systems.
“There’s so many things wrong with that workflow, but that is currently what some advisors unfortunately are doing,” Fields said.
Instead, chief compliance officers who recognize advisors’ interest in AI need to figure out a way to safely bring it into the firm. And a large part of that involves thorough due diligence.
AI due diligence and asking vendors the right questions
Before a firm even reaches out to vendors, it should clearly understand what problem it needs to solve with AI, be it workflows, reporting, risk assessment or something else. Only then should an RIA identify vendors that might solve the particular issue, Douglas said.
A key part of due diligence is understanding how those vendors access and store data, she added.
“With AI specifically, I think it’s really important that the teams ask questions around, what are the processes that are in place to not only validate but also test the accuracy of the data,” she said. Vendors should also “be able to demonstrate with full transparency” around documentation and data validation, she said.
RIAs should also dig into the technical settings of the models a vendor uses.
When it comes to generative AI models, one setting worth asking about is “temperature,” which controls how much randomness the model introduces into its responses.
“The lower the temperature, the less creativity that AI is going to provide,” Fields said. If the end goal is producing marketing content, a high temperature setting could be useful, but for performance analytics, for example, the temperature should be low.
“It’s a question that firms need to ask their vendors,” Iskowitz said. “Every vendor is putting AI into their tools somewhere. You need to ask them, ‘What is the temperature of the models you’re using?'”
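For readers unfamiliar with the setting, here is a minimal sketch of how temperature is typically passed to a language model, assuming the OpenAI Python SDK and hypothetical values chosen only for illustration, not as vendor recommendations:

```python
# Minimal sketch: how a "temperature" setting reaches the model.
# Assumes the OpenAI Python SDK; model name and values are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_text(prompt: str, creative: bool) -> str:
    # Higher temperature -> more varied, "creative" wording (e.g., marketing copy).
    # Lower temperature -> more deterministic output (e.g., performance summaries).
    temperature = 0.9 if creative else 0.1
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content
```

A value near zero makes outputs more repeatable, which is why analytics and reporting use cases call for a low setting while marketing drafts can tolerate a higher one.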
Fields said firms should also ask vendors if they can swap out the underlying AI models as better ones become available.
Beware of vendor pitches that are all flash, he added. Vendors should explain at the outset how they’re implementing GRCC — governance, risk, compliance and cybersecurity — with AI. If they don’t, “That’s a red flag for me,” Fields said.
AI policies and training to reduce regulatory risk
Financially, there’s a lot on the line for firms to get compliance right. In the past five years, Iskowitz said, the SEC has imposed more than $1.5 billion in penalties on firms for recordkeeping failures tied to off-channel communications.
Most advisors know that client communications must be archived. But they may not realize that prompts they input to generative AI tools also need to be archived. And if advisors are using unmonitored devices or apps to arrive at outputs, there’s no log to show regulators — a risk for the firm. Partnering with a tech platform that logs and captures those steps is one way to reduce risk, Fields said.
“If there is an audit, or if there’s some sort of regulatory pressure that comes down, these firms are going to be covered,” he said.
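As a rough illustration of what logging and capturing those steps can mean at its simplest, here is a sketch with a hypothetical ask_model callable and file name; real archiving platforms layer retention schedules, supervision and tamper protection on top of this kind of record:

```python
# Sketch of prompt/output capture for audit purposes. The file name and
# ask_model helper are hypothetical stand-ins for a firm's actual tooling.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # append-only archive, one JSON record per line


def logged_completion(advisor_id: str, prompt: str, ask_model) -> str:
    """Run any model call and archive the prompt and output for examiners."""
    output = ask_model(prompt)  # e.g., the draft_text() sketch above
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "advisor_id": advisor_id,
        "prompt": prompt,
        "output": output,
    }
    # Each interaction is written as one JSON line, so there is a log to
    # show regulators of what was asked and what the model returned.
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return output
```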
Updating compliance manuals and firm-wide training policies could also help keep RIAs safe from regulatory risk by letting advisors know what’s acceptable and what’s not.
“It doesn’t have to be a giant booklet,” Iskowitz said of the AI policy.
Firms should also ensure that advisor training aligns with the internal compliance manual, and that training happens not only when new employees are onboarded but on a regular cadence or as new strategies are implemented, Douglas said.
To draft those policies, outside experts can help, especially since AI tools are still so new and experience with them is limited, she said: RIAs “should really rely on CCO and compliance consultants in the industry to help you implement your compliance, your AI policy into your compliance manual to make sure that it’s meeting up to the standards.”