Humans Are Living a Video Game Ending (We Just Don’t Know Which One)
Johnny Sandquist
Founder & CEO, Three Crowns Copywriting & Marketing
There’s a comforting narrative dominating the discussion around AI in wealth management right now.
Have you seen people repeating these phrases at industry conferences and on LinkedIn?
“AI won’t replace you.”
“It’s just a tool.”
“Robo-advisors didn’t kill us. AI is no different.” (Schwab’s CEO just said that!)

For most people building software today—including your favorite WealthTechs—AI is truly just a tool. They’re using it to help them code and test faster, set up workflow automation, and fine-tune portfolio engines.
But the conversation is missing some truly big pieces. One piece that no one is talking about is this: Even those who are building with AI don’t control the trajectory of AI.
You’re building your firm on infrastructure created by companies you have no influence over. You’re restructuring teams around it. Designing workflows around it.
And meanwhile, there are only a handful of people in the world making the actual decisions to determine the future of AI: Sam, Mark, Dario, Sundar, and Satya.
You’re playing with toys you didn’t build, and those toys can be taken out of your hands or completely swapped out for different toys at any moment.
The talking points from WealthTech providers right now are simple. Too simple. “We’re not building technology to replace advisors, we’re building technology to support advisors.”
That’s cool. What did you expect them to say? They make their money by selling to advisors. No shade, but these statements are neither revolutionary nor interesting.
But those guys aren’t the ones who will have the final say in this conversation. Neither are you.
So…if you don’t control the trajectory of AI, the next logical question is: where will the industry start to feel pressure first?
The Talent Pipeline Erosion
Even as the industry is saying things like “Robo-advisors did nothing, I’m not worried,” there’s still a hidden current beneath the conversation.
In the back of people’s minds, the question of whether AI will replace advisors is a serious one (even if it’s not yet an immediate scenario).

But what we’re seeing with AI is that it puts pressure from the bottom up, not the top down. Asking if advisors are going to be replaced by AI is asking the wrong question right now.
Instead, AI is advancing to the point where it could theoretically do the job of a paraplanner, an associate advisor, or an investment analyst. You know, the roles someone works in on the way to eventually, one day, becoming a lead advisor.
If AI increasingly handles tasks like planning prep, scenario modeling, investment analysis, meeting summaries, follow-up workflows — those roles are naturally going to compress.
Short-term, that makes businesses feel way more productive and profitable.
Long-term, though? I’m concerned that it will lead to talent erosion.
Right now, we’re already talking about not having enough young advisors to replace the older ones who are retiring. The conversation about a talent shortage is happening already.
So what happens if AI thins out the early career layer of the talent pipeline? The pipeline itself changes.
If those traditional paths to becoming a great advisor get removed, do people simply step into that lead advisor role right away — or does the profession itself become radically reshaped?
On its face, AI in wealth is about automating a few tasks. But the reality is bigger. At some point in the timeline, automation has to remove something or someone. If it’s people, then the downstream effects could add to the talent shortage.
The Human Exceptionalism Assumption
Another comforting belief I see repeated over and over, sometimes multiple times per day from different people on LinkedIn:
“Investment management is commoditized, sure. But people will always crave human connection.”
My response: Yeah…Maybe???
Embedded in that belief is an assumption that human-to-human connection is inherently defensible.
I’d like to think that’s true. But I prefer to look at the way things are and ask “why not?” instead of agreeing. I think it leads you to a more interesting place.
So, let’s consider where things are now:
People already confide in AI. They ask it for advice in everyday situations and about their relationships. They use it to partially replace therapy or to process emotions they don’t want to talk about with a friend.
This can go to ridiculous places. Some people have “married” their AI. They’ve fallen in love with it. They’re doing Joaquin Phoenix and Scarlett Johansson cosplay. But for real!
And before you object—yes, I know it’s a tiny subset of the human race doing weird shit like this. I am NOT saying that AI marriage will ever become a big thing.
What I am saying is that people cry over fictional characters. They form attachments to simulated worlds. Emotional authenticity and emotional experience aren’t the same thing.
If AI reaches a point where the emotional simulation feels real enough, the distinction between “authentic” and “synthetic” may matter less than we want to think.
Are we really that far away from Ana de Armas as a holographic girlfriend like in Blade Runner 2049?
And if not, are we that far away from choosing between Bob the Advisor and a holographic Ana de Armas advisor?

Image source: IMDb
I don’t think the importance of human connection is going to disappear by any means. But assuming it can’t be competed with when it comes to professional services is a comfortable take, not an analytical or inherently correct one.
Even if human connection holds, though, we still have to think about what might get commoditized next.
Commoditization Kills the Average, Not the Excellent
While we’re talking about “correct” vs “incorrect” takes, let’s return briefly to the robo-advisor argument.
Robo-advisors did not obliterate investment management; they just commoditized the middle. They automated away the generic, model-based planning that had let average advisors get by while hardly touching investments at all.
Robo advisors did not destroy the value of excellent, personal investment advice. You know what got more valuable once robo-advisors automated out the middle? Extreme personalization, direct indexing, and specialized strategies.
Those things are now differentiators from the robo-advisors and the hands-off advisors letting their software run investments for them.
Mass commoditization squeezes the average, while the outer edge becomes more distinctive and more valuable.
It’s possible that the same thing will happen with AI. It’ll weed out the average advisors providing average client experience, and make the ones creating a stellar and unique experience much more valuable.
Side Argument: AI Still Hallucinates and Isn’t Even That Good
Okay, brief pause before I go further.
Let’s address a common objection folks bring up any time someone tries to say something about AI being the future.
“It makes things up. It gets basic math wrong.”
Yes. True.
And guess who else makes shit up and gets math wrong all the time? Humans.
It’s not useful to expect perfection or to try and “gotcha” AI on its (increasingly rare) hallucinations. And furthermore, there’s so much baked into this conversation that never rises to the surface.
What service and model are you using? Is it the free or paid version? What data sources did you connect? What prompt did you start with?
Oh my gosh, there are so many variables. I can’t possibly even begin to cover them all. I’m writing an opinion blog, not a paper for peer review.
What I’m trying to say is that things like this are bumps in the road. They don’t stop progress. Two years ago we were all making fun of AI for how stupid Will Smith looked when eating spaghetti. Now I have no clue if any given video I see online is real or fake.
It’s all following the same forward trend line.

Regulation Will Slow Down AI in Wealth Management
Alright, deep breath time.
Advisors, you do have some sizable insulation from AI disrupting the industry too fast, and that insulation is called regulation.
The SEC (like all regulatory bodies) moves so, so, so slow.
It took them about 100 years to rethink how to let advisors market themselves. I’m not confident they are going to all of a sudden evolve from the tortoise into a hare because AI is here.

No major financial institution wants fully autonomous AI dispensing unchecked advice. Liability is a thing that matters in financial services.
That slows direct disruption, but regulation won’t eliminate structural change that occurs outside of a regulated advisory practice.
Robo-advisors didn’t come in and start the automated advice revolution. But they did influence asset management pricing and reshape consumer expectations about what a premium digital experience looks and feels like.
I think a trap many are falling into is assuming that AI has to fit into today’s RIA box. Robo-advisors tried to fit themselves into an existing box and disrupt it from the inside. It didn’t truly work.
There are business types and structures we haven’t conceived of yet that may be created to deliver financial services through an AI-first layer.
Consider this: Legacy financial institutions are still trying to figure out APIs. To them, AI is just another buzzword that will come and go like cloud, big data, and robo. It’s painful to watch. So true disruption and new structures are going to come from outside the industry establishment, not from within existing players.
Those structures will emerge first, and then regulation will respond. Once that happens, we’ll know the direction of the future state of the industry.
Stop Saying This Phrase
Before I close this thing up, here’s one more thought that keeps rattling around in my head—and where I think the real danger lies.
When folks in wealth management say things like “Robos didn’t disrupt us, and therefore AI won’t do any damage either” it sounds eerily similar to another phrase that every Business 101 student is taught to avoid:
“We’ve always done it this way.”
That mindset has ended more industries than any technological leap ever has. Reminds me of this classic Leslie Nielsen gif:

There is no danger in believing in human value, but there is tremendous danger in assuming the current structure is permanent just because.
When I see people get up on stage at industry conferences and tell advisors not to worry because AI is just like robo, I see a lot of naive complacency on display.
There’s a strong correlation between “this won’t change much” and long-term decline.
If you get complacent, you eventually collapse.
Three Possible Endings for AI in Wealth Management
Does this all sound dramatic? Yeah, I guess. The stakes are dramatic.
I’m going to tie this all together with a reference to what is probably my all-time favorite video game series, Mass Effect.
Mass Effect 3 concluded the original trilogy of this sci-fi epic (shhh, the fourth game never happened). Essentially, your character is leading humanity in a seemingly unwinnable war against alien synthetics.

Image source: Wikipedia
At the end of the game, once you’ve beaten the final boss, you must make a choice:
- Destroy the machines.
- Upload your consciousness and control them.
- Merge all synthetics with all of humanity to create a new, hybrid existence.
I think this is useful framing for the path we’re on with AI. Not just in the wealth management or the marketing industries, but society as a whole.
Destroy
This one is unlikely at this point. It doesn’t matter how much water or electricity might be wasted. Society isn’t going to suddenly reject or minimize AI. The possibilities are too dazzling. The obsession has fully taken root.
Control
This is today’s narrative. We’re going to use AI as an assistant, keep the machines subordinate, and keep humanity on top. AI will never get smarter than us. Yay, humans. We’re the best.
This works for now. Will it work 20 years from now? To Be Determined.
Merge
In the third scenario, human + AI becomes the baseline. This is what Elon Musk wants with his stupid Neuralink devices that would create integrated intelligence and a true neural interface.
It sounds like science fiction, right? It couldn’t possibly be true.
When I watched Dick Tracy as a kid, I saw him talk into a wristwatch and desperately wanted it to be real but never thought that kind of technology could exist.
Now everyone and their mom wears an Apple Watch, which does about one million more things than Dick Tracy’s outdated watch ever could dream about doing.
What looks ridiculous in one decade becomes commonplace in the next.
If the future is augmentation, the advisor of 2040 won’t look anything like today’s advisor. Hell, society won’t look anything like today.
As personally terrifying as it is to say, I see this as the most likely path forward, even though I don’t know what that looks like yet.
But be honest, neither do you.
There’s Only One Barrier to Being Replaced
I feel like I’ve just written two-thousand words of pessimism, but I really don’t see it that way. I think I’m being realistic by choosing to think about the “what if” moments instead of pretending that disruption won’t ever happen.
I’m doing this for myself, too. Marketing is changing in incredible ways right now.
So, if we want to be realistic and look at what’s next, I think we need to end with something real and not just stay stuck in hypothetical conversations about AI.
Regulation, human connection, and complexity of thought are all current barriers to disruption. But they only offer partial insulation from change.
None of these things are permanent moats in an environment where innovation is constant and AI models are getting better every couple months, not every couple years.
The CEO of Microsoft AI thinks AI will have human-level performance for almost all white-collar tasks within the next 12 to 18 months.
So we’ve got to stop focusing on what we do. The “what” can change at any given second.
The only durable insulation from disruption, at least in how I see it, is yourself.
Your personal philosophy, your worldview, your actual life, your thoughts.
You cannot control the trajectory of AI. The tech tools you use don’t control its future, either.
But what you can control is how clearly you define who you are and what you believe in. As work gets commoditized, that’s the only way to continue to exist.
You’ve got to be so clearly positioned, so necessary in your clients’ lives, that replacing you feels like replacing a perspective. A shared part of themselves. Not just a guy or lady who does work for them.
And that conversation is just getting started.

