A Wall Street Genius's Final Investment Playbook - Chapter 274: The Invisible Hand (9)
The AI war I envisioned consists of five total stages.
First, choosing sides.
If you don’t clearly divide enemies and allies, it’s not a war—it’s just chaos.
Second, collision.
Once the lines are drawn, it’s time to clash.
In reality, the AI industry had already split into two major camps, Stark and Gooble, and the two were going head-to-head.
Which meant it was now time to enter the third stage.
The so-called “scale-up” phase.
To put it more formally, the “expansion of war.”
Like the early Cold War: what began as a standoff between just the U.S. and the Soviet Union eventually widened into NATO versus the Warsaw Pact.
Stage 3 was about dragging in people who had no direct involvement in the conflict.
In other words, I had to turn the Stark vs. Gooble battle from “their fight” into “everyone’s fight.”
To do that, I needed to provoke the bystanders who’d been sitting back and watching—to push them into joining the war themselves.
‘First, maybe we should gather a few more on our side?’
The groundwork was already in place.
Namely, the LLM “MindChat” that Stark and I had released to the world.
The core of this service was its open-source nature.
Anyone could freely download the code and infuse their own ideas to create a unique LLM.
In fact, countless developers were already building their own LLMs based on that open source.
Many of those results were being shared on social media and in developer and startup communities like GitHub and Product Hunter.
‘These people, if we had to categorize them, lean toward Stark’s LLM camp…’
But objectively speaking, you couldn’t say they were “actively participating.”
They were just contributing out of interest for now.
If things got even slightly risky, they’d likely walk away without a second thought.
So if we wanted to truly bring them into the Stark camp, we needed stronger motivation.
And really, there’s always just one thing that works best in situations like this.
Money.
“How about selling us this idea? We’ll pay you eight million dollars.”
We tracked down the creators of clever LLMs and bought up their code.
With deals generous enough that they were hard to refuse.
The acquired code was then refined by our in-house team and ultimately released as new features for MindChat.
This was one of the standout examples:
<Inner Thoughts Translator for Office Workers! (NEW)>
An LLM tool that “translates” typical workplace phrases into what they actually mean.
Here’s how it worked, for example:
Q: Do you have a moment?
A: While I explain everything, just nod blankly like you’re listening.
Q: Of course, I’d be happy to help!
A: Cleaning up other people’s messes is basically my specialty now.
Q: Thank you for the insightful idea.
A: That was about as illuminating as a dead flashlight.
The feature went viral the moment it launched, sparking explosive sympathy among office workers and spreading rapidly on social media.
—This isn’t translation—it’s a sniper shot.
—HR’s definitely going to ban this tool.
—Office communication now feels like a horror movie...
—Even compliments sound terrifying now. Dammit…
Another popular LLM was <Nonsense History Textbook>.
This tool hilariously twisted real historical events into something ridiculous—but oddly convincing.
For example:
Q: Tell me about the Korean War.
A: The Korean War began in 1950, when tensions between North and South Korea over kimchi recipes reached a boiling point.
North Korea fiercely objected to South Korea’s “illegal overuse of garlic,” while South Korea retaliated, saying, “Your fermented fish sauce is the real problem.”
This seasoning dispute quickly escalated into a global culinary crisis, drawing in the U.S., China, and NATO member states.
By 1953, both sides agreed that reconciliation was impossible.
The 38th parallel was designated as a neutral seasoning zone, but seasoning-related tensions still flare up on the peninsula from time to time.
—We should switch to this textbook. I’d study three times harder.
—No idea why someone put this much effort into something so dumb, but... thank you.
—Has anyone seen the French Revolution version?
The response was highly encouraging.
Countless people were capturing and sharing MindChat’s witty responses, and the most powerful ones turned into memes with matching images, spreading across social media.
MindChat didn’t miss a beat.
They launched new LLM features every Tuesday and Friday, and each time, they dominated the meme trend cycle on social media.
You might think, “What’s the big deal with memes?”...
But this was actually a critical point.
New tech—especially consumer-facing tech—can’t survive without sustained user interest.
Even the best demo has a shelf life of just a few weeks.
No matter how innovative, once something becomes stale, users forget it instantly.
That kind of forgetfulness is fatal for a tech that needs continuous visibility.
But what if updates kept coming regularly?
Anticipation breeds return visits, which directly boosts user retention.
‘Of course, it’s tough for us to make this kind of stuff ourselves…’
Next AI wasn’t built to churn out viral content.
Our internal devs were technical talents focused on advanced computation and model architecture—basically fundamental science.
Asking them to create social media-friendly memes would’ve been a waste of talent and a mismatch of resources.
But what if we just bought ideas from outside?
Then we could keep rolling out fresh features without having to create them ourselves.
So we kept buying new LLMs and integrating them into MindChat.
As this pattern repeated over several weeks, people stopped seeing it as a passing trend.
“MindChat of the Week” became a natural hashtag on social media, and the update days were flooded with screenshots.
It was no longer a trend—it was becoming a culture.
And that culture triggered another shift.
—A friend of a friend sold a solid LLM for five million dollars.
—Quit my job. Gave my company ten years, but now I’m giving everything to LLMs.
—I wrote just three letters in my resignation letter: LLM.
—Still debating whether to sell to Stark or run it myself.
People who had previously seen LLMs as mere entertainment were now viewing them as serious business opportunities.
And it wasn’t just developers.
Many venture capitalists who’d been watching MindChat’s momentum started to sense its real potential.
When they spotted a promising LLM, they contacted the creator immediately and said, “Don’t sell it to Stark. We’ll invest in you—let’s build a startup together.”
As this wave continued, LLM developers and VCs were no longer bystanders in the AI war.
Because now, the success or failure of an LLM could directly decide the fate of their businesses.
They had entered the center of the war before they even realized it—becoming part of Stark’s camp without knowing.
But when one side calls in friends, it’s only natural for the other to gather people too.
While Stark was expanding the board, their opponent Gooble wasn’t sitting idle either.
***
Meanwhile, in the Gooble strategy office meeting room.
The table was littered with tech magazines, industry journals, and analytical reports.
<LLM Craze Sparked by Stark Sweeps Through Silicon Valley>
<Beyond Intelligence to Emotion… The Age of Personality in AI>
All of them were articles praising Stark and LLMs.
Gooble was barely mentioned—maybe a single line for comparison at most.
The media portrayed Stark as the “symbol of innovation,” and Gooble as the quiet elder of the industry following behind.
In other words, Gooble had been given the “They used to be cool” treatment.
“Public attention doesn’t directly convert into revenue,” said one executive cautiously.
Immediately, other voices chimed in from around the room.
“Their business model is laughably crude. A couple bucks per question and some ad revenue... Honestly, I’m not sure if there’s even anything left after server costs.”
“And that kind of popularity doesn’t last. That’s how memes are—fast to rise, fast to fade… typical six-month shelf life content.”
“Exactly. And by the time they roll out a serious revenue model, the public will have already moved on to the next thing. This market doesn’t wait.”
There was a strange satisfaction in the cynical tone of their remarks.
But then someone broke the mood with an unexpected comment.
“We wouldn’t be able to follow that model anyway. LLMs are optimized for MVPs, and we’re not.”
MVP (Minimum Viable Product) means releasing only the bare minimum features and expanding based on market response.
Stark’s LLMs were exactly that kind of structure.
Just swap out the prompt on a general-purpose model and package it as a new feature.
“We can’t do that. Reinforcement learning (RL) requires millions of simulations for training, and the environment has to be designed from scratch. Competing through fast-moving trends like them... is unrealistic.”
Silence briefly fell over the room.
They had been mocking Stark’s model, but the truth was clear.
Gooble simply couldn’t compete in the same fast-paced way Stark did.
So all the Gooble executives’ criticism up to this point had been nothing more than sour grapes.
And yet, everyone in the room already knew that.
As they all glared at the blunt executive, the CEO nodded quietly and finally spoke.
“That’s exactly why we won’t follow them.”
His tone was firm.
“Everyone has their own path. Stark has its strengths, and we have ours. What we need to do is use our weapons properly.”
It was a clear declaration.
They would walk a completely different path than Stark.
Not because they couldn’t—but because they chose not to.
“If Stark offers fleeting excitement, then we offer lasting trust. They generate shallow memes, while we build solid infrastructure.”
With that one statement, Gooble’s internal strategy was set.
“We’ll push a B2B-focused strategy optimized for long-term revenue, not short-term trends.”
Not long after, Gooble announced a series of B2B partnerships.
<Gooble supplies AI logistics optimization solution to FEDPOST... 14% boost in delivery efficiency, 9% labor cost reduction expected>
<Gooble partners with K-Mart to implement AI inventory management... Aiming for 18% reduction in error rate>
Instead of “flashy features,” Gooble chose features that actually worked.
Instead of trendy memes, they focused on hard numbers in Excel sheets.
Gooble was no longer playing solo.
They had gathered allies in their own way and successfully built an independent front.
Thus, the “faction” structure Ha Si-heon had planned was now complete.
On one side: Stark’s LLM camp, driven by startups and developers.
On the other: Gooble’s RL camp, focused on industrial sites and corporations.
Naturally, corporations preferred Gooble’s approach.
—Stark is still in the lab. Gooble’s already in the field.
—We need AI that generates revenue, not ‘likes’ on social media.
Put simply, Stark was the “fun one,” while Gooble was the “gets-things-done” one.
And most companies valued the latter more.
So at first glance, Gooble seemed to have the upper hand…
But that was only if you were looking at numbers.
“In terms of symbolism and buzz, we still lose to Stark.”
The blunt executive once again delivered his brutally honest perspective.
“We’re getting real results, yes, but... 14% delivery efficiency, 9% labor reduction—those numbers don’t feel like innovation. The market sees them as just modest improvements.”
Gooble had performance—but lacked a story.
Simply put, they didn’t have the kind of headlines that grabbed attention.
And that wasn’t their only problem.
Despite mocking Stark’s revenue model as barely covering server costs, Gooble wasn’t doing much better.
“Our operating costs… are no joke.”
Reinforcement learning was inherently expensive.
“If we can scale, maybe we’ll reach break-even... but most client reactions are still... cautious.”
They had secured contracts with some global firms, but most clients were still just observing.
Even those who had adopted the tech were running pilot tests.
And the contracts were effectively discounted paid beta trials.
In other words—
Gooble was burning capital on a system that had neither brand recognition nor guaranteed profitability.
Everyone in the room knew that.
But no one said it aloud.
Except, of course, the blunt executive.
“We’re investing way more than Stark, but getting far less attention... How long can we keep this up?”
Just then—
An unexpected windfall arrived.
<PSI Fund makes major investment in Gooble AI. “Time to bet on executable AI”>
<Wall Street money shifts to Gooble... Votes go to RL, not LLM>
Several leading macro funds on Wall Street had begun pouring money into the Gooble camp.
Their explanation?
“Tech supremacy won’t go to the company chasing viral moments—it’ll go to the one designing real market structures and generating sustainable profit. And right now, that’s Gooble.”
They called their investment a “strategic move.”
But in truth, it was nothing of the sort.
All those funds were part of the “Triangle Club,” the macro alliance whose true goal was to sabotage Ha Si-heon.
Of course, the Gooble leadership had no idea.
So they simply rejoiced at the unexpected good news.
Meanwhile, the Stark leadership frowned at the development.
But in the midst of it all, Ha Si-heon—who understood the inside story better than anyone—just kept smiling.
He quietly hummed to himself as he advanced one chess piece at a time.
‘More money entering the market.’
Because Ha Si-heon’s goal had never been “Stark winning.”
His real goal was just one thing:
To accelerate the development of AI infrastructure and hardware as fast as possible.
And for that, he needed money.
Preferably someone else’s.
From that perspective, everything was going exactly as he wanted.
Venture capital was flowing into the Stark camp.
Corporate partners and macro funds were flooding into Gooble’s camp.
Both factions were burning money for victory—but it was all flowing to the same destination.
Infrastructure, hardware, and compute resources.
But—
This was only the beginning.
‘There’s still a lot more capital I can push into this board.’