Are You Using AI Like This?

 

By Sam DeLeo

 

Most financial advisors would rightly rather spend time building relationships with their clients than being mired in the daily functions of their offices. The rise of large language models (LLMs) in artificial intelligence (AI) makes this more possible than ever. Models like Google’s Gemini, Anthropic’s Claude, OpenAI’s ChatGPT and others now lighten the burden of creating rote documents, communications and projects, and speed their completion.

Formatting, design and composition times shrink dramatically with image-based AIs. We can generate reports and brochures in a fraction of the time it took us before. But there is a whole new level of benefits beyond typical “run and fetch” marketing tasks. Advisors who spend time training their AI model with their specific brand and company data will enjoy a more powerful asset that can transform and streamline the entire way they do business—without sacrificing the principles on which those businesses were built.

Knowing What Your AI Is and Isn’t

Think of artificial intelligence as a library. The library building represents the interface model, whether that’s OpenAI’s ChatGPT, Google’s Gemini, etc. The librarians working inside the building are AI’s engine, its model. They run around producing countless library references of probabilities for us. They answer every question we ask and perform every content task we request, with one exception: They can’t access any information outside the library. Nor do they truly think or know things—that’s our job.

This metaphor, courtesy of Christopher Penn, an AI keynote speaker and co-founder of the data firm Trust Insights, gives us a conceptual starting point that sheds light on AI’s immense potential to help our businesses, as well as its practical limitations. If we remind ourselves that our engagement with AI revolves around probabilities generated from data that already exists on the internet and other sources—what’s inside the library “walls”—we will set better expectations for the tasks we assign to AI. We will understand it more as an aggregator of pre-existing information than as a generator of original content.

But AI involves much more than improving individual job tasks. It can exponentially scale the development of an organization as a whole, provided we make the necessary commitment on our side of the ledger.

“Technology is only as powerful as the time you take to use it effectively,” said Matt Reiner, Managing Partner at Capital Investment Advisors. Penn expresses another way of understanding this commitment on our part: “AI is an amplifier. It turns the good into great, the bad into awful.”

AIs/LLMs carry real and significant risks for businesses and our world, now and in the future. For our purposes, we will be looking at ways in which AI can be optimized by financial advisors to improve their businesses and client services, specifically in the area of marketing.

Invest Time in Training Your AI Model

Financial advisors who have not yet used AI in a significant capacity will first want to decide on their approach to the tool. Penn recommends a useful mindset to start this process: “Approach AI as a hired contractor. Or, think of AI as the world’s smartest interns,” he said. “They have a Ph.D. in everything, but they’re still just interns. You are a subject matter expert, so you need to always review results. You can’t just let the machines do it. They can only serve you as well as you direct them.”

Next, it’s wise to narrow our focus. Which of the jobs we perform involve impersonal data or material, routine business processes and transactions, repetition or drudgery, anything that does not reflect our story and our brand? These tasks make ideal candidates for AI, and the same criteria can help us decide which tasks are not optimal for it. The good news is that, unless doing so causes a large gap in productivity, we can continue to perform the parts of our job we enjoy and offload those we don’t.

“One of the issues we find with AI is that people assume they can use it for everything,” said Nathalie Nahai, author of “Webs of Influence: The Psychology of Online Persuasion.” “They don’t take the time to evaluate the results. We should always be asking, ‘What does AI do well, how does it augment human interaction in this area, and what doesn’t it do well? What are the key challenges you face and how does AI perform in that area?’ I recommend taking a specific time, two weeks, say, and test it, see how AI works in a specific area. And then keep up this testing process as you move it more into your marketing. Don’t just automatically assume it’s the best tool for the job.”

Starting with the release of ChatGPT in late 2022, some financial professionals grew concerned that consumers would come to rely on AI instead of their services. But people have had access to online trading and financial advisory services for years, and this fear misreads the role AI is currently capable of serving. The optimistic version is that, used with purpose, AI gives us more time to be better advisors.

“Anything advice-related is your job,” said John Prendergrast, CEO of Blueleaf Wealth, an advisor management platform in Boston. “(That) can occasionally be supported by AI research or data. On the marketing front, remember that AI doesn’t know what a fact is, so you really can’t trust it. The more you are the one making the conclusions, with AI filling in the details, the better off you’ll likely be.”

Prendergrast sees some advisors using AI for personal and professional branding. This can be a tricky gambit. Unaltered AI content carries common markers that we all, through repeated exposure, are getting better at recognizing. The issue grows more critical when advisors use AI to write the copy for their websites. Google crawls sites regularly to detect duplicate content and penalizes offending sites in search rankings. For this reason, we will want to avoid publishing unaltered, AI-generated copy on our websites.

“It’s great for content details or idea generation,” said Prendergrast. “I just had (AI) turn an article I wrote into carousel format. All I did was say, ‘make into carousel.’ Last year at one of our conferences, we used it to whip out a detailed IPS in 15 minutes. So, when content is just above boilerplate, great, but you still need to edit it. It’s going to make writers more into editors, but that’s where the real writing is and has always been. So, it will raise the level of work for writers, but they still need to be the editors in charge.”

Beneath the issue of duplicated content lies the claim of authenticity. With enough data, AI can write pieces in your voice. But it can’t tell your stories. And why would financial professionals not want to use their personal experiences to connect with prospects and clients? Consumers understand what financial planners do; they want to know who they are and how they behave with their clients before trusting them with their life savings.

“How can AI make me more human, more me?” said Prendergrast. “It can take away all the automated parts of my job and let me be me, and thereby allow clients to reveal themselves to me. AI can give you back the time to do that.”

Executing and Scaling Best Uses

While current tools exist to help advisors with financial services, such as SigFig, PulseFolio, and Datamaran, we are focusing on how advisories can best use AI on their own through direct interaction. The task range we assign to AI can be as varied as our needs, extending beyond the most basic marketing functions to include:

• Data Analysis
• Customer Behavior Analysis
• Client Services
• Market Trendspotting
• Risk Assessment
• Trading Pattern Analysis

Most advisories are using AI in some capacity. But a closer look at how we use it can bring more dramatic results. For instance, if we have never “primed” AI before submitting our prompts to it, we’re skipping a critical initial step to our process. Penn refers to this as “Priming Representation,” but it’s basically creating an outline of instructions that helps your AI use prompts more effectively. Penn stresses that it’s best if we allow the machine, instead of us, to produce this primer outline.

“As with a real library,” he said, “our conceptual AI librarian knows the layout of the library way better than we do, and understands what books are in the library and what aren’t. That means that if we have the language model build our outline, it’ll contain references to known books in the library, metaphorically speaking. We’re better off having the librarian(s) tell us what (they know) about a topic than for us to tell the librarian(s) what they should know.”

Penn recommends these steps for priming your AI to be a better co-pilot for content:

PRIMING REPRESENTATION

1. Prime
• What do we know about this topic?
• What do we know about its best practices?
(Asking AI what it knows about a topic allows us to determine how much more data we need in our prompts and outlines. It also saves time by trigger-loading the AI model with relevant data.)

2. Augment
• What questions do you have?
(Asking AI if it has questions fills in knowledge gaps in both our outlines/prompts and the model’s response.)

3. Refresh
• What did we forget to ask?
• What did we overlook regarding this topic?
(Again, we’re exercising the model’s “librarians” while also accounting for our own errors.)

4. Evaluate
• Did you fulfill the conditions of the prompt completely?
(Asking AI to evaluate its work further specifies its responses and should be done after its main response. IMPORTANT: Instead of beginning a new session every time, give your AI the chance to “learn” and correct course while keeping the valuable session history intact.)

Now we have a set of instructions for our task at hand that we can feed back into the model. Think of treating AI here as if it were an athlete warming up for a performance—with some cardio and stretching, it will perform better and make fewer mistakes.
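For advisors comfortable with a little scripting, the four priming steps can also be run as one continuous chat session. The sketch below is only an illustration in Python: the `ask` callable is a hypothetical stand-in for whatever chat API you use, not a specific product’s SDK, and the sample topic is invented.

```python
# A minimal sketch of the four priming steps as one running chat session.
# `ask` stands in for any chat API call that takes the full message
# history and returns the assistant's reply; one growing `messages` list
# keeps the valuable session history intact.

PRIMING_QUESTIONS = [
    "What do you know about retirement-income planning?",        # 1. Prime
    "What do you know about its best practices?",
    "What questions do you have for me before we continue?",     # 2. Augment
    "What did we forget to ask? What did we overlook?",          # 3. Refresh
    "Did you fulfill the conditions of the prompt completely?",  # 4. Evaluate
]

def prime_session(ask, questions=PRIMING_QUESTIONS):
    """Run each priming question in turn, preserving session history."""
    messages = []
    for question in questions:
        messages.append({"role": "user", "content": question})
        reply = ask(messages)  # e.g., wrap your model provider's chat endpoint
        messages.append({"role": "assistant", "content": reply})
    return messages  # the primed history, ready for the real task
```

The returned history can then be carried forward into the task prompts that follow, rather than starting a fresh session each time.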

Once we’ve primed our AI model and reviewed the responses we received, we can proceed with our prompts. Sticking with the athlete metaphor, here’s an example of an information-gathering prompt:

“You are an Olympic Gold Medal-winning high jumper. You have set world records in your sport and are a master of the ‘J approach’ to the high jump. What do you know about best practices for mastering the high jump?”

Here are Penn’s recommended steps for a more complete prompt process:

1. Role
In our example, we assigned AI the “role” of an Olympic Gold Medal high jumper. We can further define that role: “You are eight years removed from your last world record, so you will be exploring the latest diet and fitness training for high jumpers to offset the age difference between you and younger competitors.” In this first prompt step, be sure to include relevant keywords, jargon and phrases for AI.

2. Action
Submit an action. “Your first task will be to lay out a training regimen that begins 12 months before your next Olympics competition.” We’ll want to use verbs that enhance this regimen: specify exercise sets and reps, plan a diet for each day, summarize the most important steps to achieve this objective, etc.

3. Context
Here we can submit personal information and correct any errors made thus far. “For questions about this regimen, visit ____________.com, email ____________, or call (000) 000-0000. Use hashtags: #OlympicCompetition #HighJumpTraining #DailyHighJumpRegimen, etc.”

4. Execute
If we have the content and actions we want, we can deploy them where we see fit. We can upload our new high-jump regimen to our website. We could ask the AI model to create a step-by-step video of it for our YouTube channel.

If the content and actions still need refining, we can submit augmented requests: “Format the high-jump regimen into a one-sheet”; “Find images of Olympic high jumpers suitable for a website”; etc.
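As a rough sketch (Python; the helper name and all field values are ours, purely illustrative), the Role, Action and Context steps can be assembled into one reusable prompt template:

```python
# Sketch: combining the Role, Action and Context steps into a single
# prompt string. The Execute step is whatever we do with the response.
def build_prompt(role, action, context=""):
    """Join the prompt components; Context is optional."""
    parts = [f"Role: {role}", f"Action: {action}"]
    if context:
        parts.append(f"Context: {context}")
    return "\n\n".join(parts)

prompt = build_prompt(
    role=("You are an Olympic Gold Medal-winning high jumper and a "
          "master of the 'J approach' to the high jump."),
    action=("Lay out a training regimen that begins 12 months before "
            "your next Olympics competition. Specify exercise sets and "
            "reps, and plan a diet for each day."),
    context="Use hashtags: #OlympicCompetition #HighJumpTraining",
)
```

A template like this keeps the three components in a consistent order, so refinements only change one field at a time.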

Many advisors are scaling AI to reallocate their marketing resources. “In terms of video editing and podcasting, we used to have a team of people do that,” said Reiner, “and now AI has a tool that does all of it—editing, transcription, social media posts, and so on. The people who were doing that for me are now doing more elevated and sophisticated tasks. They’re thinking about how to make the podcast more engaging and dynamic.”

One use of AI models we should avoid comes in the area of computational data. “They’re bad at math, which means you shouldn’t use them as computation engines,” said Penn. “So, for example, don’t input S&P data and say, ‘Write me a financial forecast.’ Instead, you would input that data and say, ‘Write me some software that can forecast this S&P data.’ ”
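To make Penn’s distinction concrete, here is the kind of small, deterministic program we might ask the model to write for us, instead of asking it to do the arithmetic itself. This is only an illustration (plain Python, a least-squares trend line, made-up prices), not a serious forecasting method:

```python
# Sketch: a tiny linear-trend forecaster of the sort we might ask an AI
# model to *write*, rather than asking the model to compute a forecast.
def linear_forecast(values, steps_ahead=1):
    """Fit y = a + b*x by least squares, then extrapolate steps_ahead points."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return [a + b * (n - 1 + k) for k in range(1, steps_ahead + 1)]

# Made-up closing values, not real index data:
prices = [4700.0, 4710.0, 4720.0, 4730.0]
next_points = linear_forecast(prices, steps_ahead=2)  # continues the trend
```

The model is far more reliable generating and explaining code like this than performing the computation inside its own response.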

The point of using these steps is to understand our interaction with AI as a developing conversation that incrementally homes in on our goals. In that respect, we will always be well served to query AI about the most common errors or assumptions on our part, as well as the general mistakes to avoid in our specified tasks.

This approach is different from treating AI like a search engine, where many of us get stuck and then settle. A search-engine style of engagement will almost always fall short of maximizing the value of AI. We will get results, but not nearly the complete results we could be getting, because we failed to be thorough in our request process.

Compliance and Transparency

The good news about AI and financial industry compliance is that it requires no additional procedures. The data we gather from AI gets submitted to compliance officers just like any other information we currently use in our businesses. The advantage comes with using AI as a tool to keep up with any changes in compliance that might affect advisories.

“You can do this with the new DOL fiduciary regulation, for example,” said Penn. “Input the document into your AI, thoroughly describe your role as a financial advisor, and then ask AI how the regulation will affect your work.”

We have seen that the best use of AI content is to proofread, edit and source-check it, then revise or rewrite it in our voice so it aligns with our business brand. Another reason for doing this is that content produced entirely by AI cannot be copyrighted, so we have no legal basis to prevent others from using it as their own.

There are two schools of thought regarding the transparency of using AI in our businesses. One holds that, since the content will be sent through compliance like any other, we have no need to tell anyone how we produce this content. While this assumes we would never enter any of our clients’ personal information (PI) into AI, we should ask ourselves how we would feel if we were in the position of our clients. We should be adopting this perspective with our marketing, anyway, so extending it to the use of PI is even more appropriate. (Note: Models like Llama 3 and Command R can operate offline and thereby might be safer in guarding PI than our current systems, but we should vet these carefully and always keep updated on their terms of use.)

“If you are embarrassed about anything you are using and don’t want clients to see it,” said Prendergrast, “maybe that’s a hint that you shouldn’t be using it. If you are that worried, use disclaimers where possible, such as: ‘We never put personal information into an AI engine.’ Transparency is always the best policy. Some advisors who use the technology will have a long compliance and regulatory discussion with clients, which is unnecessary, because if you do the right thing, you are already transparent and document your behavior, so you’re fine. Don’t bring up that kind of technical subject matter and language with clients and prospects because it may only wind up raising a red flag with them that is unnecessarily concerning. So, why is it necessary for you to bring it up in this way? Be transparent, not bureaucratic.”

Sometimes, this process can be as simple as us listing AI as just one more of our content sources. We would not present someone else’s content as our own, and AI should be no different.

“Sourcing is important no matter what,” said Reiner, “and that includes marketing insights from AI. I don’t think you have to source it any differently than you would if you looked something up in Wikipedia.”

With the power of AI, lead-generation functions might look tempting, but we should never use AI for direct leads. “It can’t get you hot leads,” said Penn, “(but) there are illegal ways to do that if you’re interested in doing some jail time. So no, I would not use it to get you leads.”

Our Responsibility

The financial industry is no different from any other when it comes to AI. We succeed by scaling it to simplify and automate tasks, not just handling the individual prompt here and there. This can be a gradual process that each advisor and advisory approaches according to their own comfort level. But AI is already growing into an industry differentiator, and an advisor who avoids it will very soon look like someone who refuses to use email in favor of snail mail.

Some tasks are so much a part of what we do that it may take time to see past our biases and recognize them as jobs appropriate for AI. But over time, odds are we will discover more jobs to offload, rather than fewer. Most of the limitations exist only in our perspective and our willingness to push beyond them. Sometimes our efforts will fail or produce uneven results, or even errors when it comes to jobs involving generative language or math. These moments require our participation, whether in source-checking and correcting or in revising our prompts. We are responsible for what we ask AI to produce, so if the model we are using delivers an error, we can’t pin that on AI. The key takeaway when we encounter difficulties with AI comes via Penn’s coda: “When in doubt, provide more data.”

The Marketing AI Institute has created an interesting manifesto for using AI. One of its main tenets reads: “We believe that humans remain accountable for all decisions and actions, even when assisted by AI.”

We can also use AI to gauge sentiment and knowledge about how others see our business. Penn suggests that it’s wise to regularly ask AI the following types of search questions: “Ask it: ‘What do you know about our company? What can you say about our brand and reputation?’ We shouldn’t be using AI as a search engine, but this is an exception, because we need to know what it knows and, therefore, what others may know about our business.”

We can’t assume that simple browser searches will be the entirety of what a prospect can learn about us. AI will discover and use the branding content we have created for our company. This is why internal brand marketing has to be a part of every advisor’s platform. Perhaps AI can assist with logos, title brainstorming sessions or even some of the company’s story. But with this type of information, the more personal the brand’s narrative, the more it will connect with people. When we want unique and authentic content, we remain our own best sources.

At the end of the day, what makes us good advisors will make us good AI users: Diligence. Circumspection. Transparency. Ethical concern above and beyond what regulations require. Rigor and care in what we do.

 

Each of us must determine how we address our use of AI to our clients. When we are authentic in our intentions, we are much more likely to make the right choice. AI should never compromise what we owe to our clients or what we owe to each other. With the right processes and clear intentions, we can transform our business and still retain what makes us human and effective advisors.