A Wall Street Genius's Final Investment Playbook-Chapter 280: The Invisible Hand (15)


Even among bubbles, some are more robust than others. And in that sense, what I wanted was a bubble that would never burst.

But was the one I had created truly at that level?

‘No way. There’s still a long road ahead.’

Right now, stock prices were soaring like crazy, and investors were riding the wave. But what if even a small piece of bad news dropped? Panic selling would cascade, and this fragile bubble might pop in the blink of an eye.

So what I truly needed was a bubble so sturdy, it couldn’t even be compared to the usual flimsy kind.

Then how could I make this bubble even more durable?

The answer was simple:

"It needs to pass a stress test."

I needed to publicly demonstrate this bubble’s durability in front of investors. If it looked like a bubble but survived jabs, pressure, and violent shaking?

Then it wouldn’t matter whether it was a bubble or just looked like one. Investors would be reassured:

"It won’t collapse over just any little bad news."

But of course, every test requires an examiner. And who better to validate the strength of this bubble than a fierce adversary?

Only by withstanding pressure from those who oppose me could the bubble’s durability gain real credibility.

So I carefully chose my inspectors. None other than...

The macro fund managers invested in Gooble.

‘I’ve been provoking them all this time for exactly this moment.’

This time, I didn’t even have to roll the dice. The opponent struck first.

The first to draw the sword from the macro side was Gideon Horton. He appeared on CNBC and launched a direct attack on my AI ETF, AFII.

“This isn’t a healthy rise. Most of the stocks in the ETF are small- to mid-caps and can’t absorb the influx of funds. This rally isn’t driven by fundamentals or earnings—it’s driven by speculation. Yes, I’d call it a bubble.”

Then came the warning:

“Structurally, this could be more dangerous than the dot-com bubble. Blind investor faith is now combined with automated buy systems. This combination creates a dangerous positive feedback loop.”

On screen, he displayed charts illustrating the danger: ETF demand → Buy underlying assets → Asset prices ↑ → ETF return ↑ → More demand. The market’s psychology drove the price higher, and the rising price further fueled market psychology—a self-reinforcing feedback loop.

“All bubbles burst eventually. Right now, it might look profitable, but if you can’t exit at the right time, you’ll lose everything. And as always, the last ones in suffer the most.”

This was my first durability test. So I went on air to confront Horton head-on.

“A bubble? I strongly disagree. Yes, the inflow has created short-term overheating. But to dismiss this purely as speculation misses the point. AFII’s rally reflects a structural shift AI is bringing to the real economy.”

“AI… can we really call that the real economy? Think about 3D TVs or VR glasses. They were once hailed as ‘next-gen tech’ but vanished from the market. LLMs might go the same way. There's not even a clear monetization model yet—”

“Yes, I agree with you on that.”

“…What?”

Horton’s eyes flickered in surprise. He had expected me to tout AI’s limitless potential and dazzle with future profit models. But I had no such intention.

“When you don’t know the answer, it’s better to just say you don’t.”

Even near the end of my last life, monetizing AI and achieving broad adoption remained an unsolved puzzle.

“So… you admit that you can’t be sure AI will take hold, and you don’t know the revenue model?”

Horton’s tone betrayed disbelief. In any test, someone answering “I don’t know” is usually marked wrong. But here I was, confidently submitting that “wrong” answer—on live television.

And then, I looked straight into the camera and said:

“Yes, I don’t know. But this kind of uncertainty isn’t unique to AI. There’s no such thing as a 100% certain investment in this world.”

I shifted the debate:

“Is AI the only uncertain bet? Isn’t all investing like that anyway?”

Why respond this way?

“Because this turns it into a relative evaluation.”

In relative grading, you don’t need the right answer. You just need to score higher than the person next to you. It doesn’t matter whether your answer is perfect, only whether it’s better than theirs.

And the first comparison target I chose was...

“For instance, Mr. Horton, I understand your fund is invested in Brexit. That’s also a highly uncertain gamble, isn’t it?”

Yes, Brexit—the shock of the UK leaving the EU. The possibility had long been floated, but most of the market—including Wall Street—had dismissed it as a theoretical scenario. They believed the UK would ultimately choose the “rational path.”

Horton thought so too. He bet heavily on a stronger pound, assuming the UK would remain in the EU.

“Isn’t that investment also based on an uncertain future? The direction might be different, but I think the essence is the same.”

But when I framed both Brexit and AI as ventures into the unknown, Horton scowled deeply and countered sharply:

“No. They’re fundamentally different. Brexit had political uncertainty, yes—but there was still data to make predictions: treaties, trade deals, currency correlations… With those analyzable foundations, you can make informed forecasts.”

I smiled lightly and asked:

“Does the presence of data eliminate risk?”

Horton shot back:

“What I’m saying is that AI has no data at all. You can’t compare Brexit, where participants’ motives and market patterns are documented, with blindly assuming some technology will be adopted.”

“So… did you secretly hold a referendum in the UK?”

“……What?”

“Because if not, it seems to me your belief that the UK would remain in the EU was also just an assumption.”

I kept guiding him into the “you’re just like me” frame. And I wrapped up with:

“So, you’re saying past data makes an investment safer? Well, we’ll find out soon enough. The referendum is only days away.”

A few days later, the result arrived.

<[Breaking] Brexit Passes – UK Votes to Leave the EU>

<Markets Reel… Pound Hits 30-Year Low>

<EU Financial Stocks Plummet… Volatility Spikes>

Later, seeing Horton sweating on TV, struggling to justify his calls, was… quite something. I even sent him a light-hearted message:

— “If I’d known, we should’ve made a bet. What a pity.”

— “But don’t take it too hard. I hear it’s a rough time to be predicting markets.”

He didn’t reply. But that didn’t matter. The point was this: I passed my first stress test.

Round 1: AI vs. Brexit.

Winner: AI.

But that was only the beginning. No sooner had Horton lost than another macro fund stepped up, continuing the relay of attacks.

This time, it wasn’t about uncertainty or lack of data. It was about timing.

“The issue with AI isn’t the tech itself—it’s when it’s being launched. MindChat isn’t ready for commercialization. It’s being rushed out, not for completion, but to ride the hype before interest fades. A product should be tested thoroughly before release. Otherwise, the risks fall not on the developers, but on the investors.”

To be fair, that wasn’t entirely wrong. So I responded calmly:

“In tech, very few products hit the market in a finished state. Even the iPhone launched as an imperfect device and rapidly evolved through user feedback.”

“That’s totally different from AI. The iPhone built on existing tech. AI is creating something from nothing—the underlying foundations are completely new, which makes the risks far greater.”

So I shrugged and said:

“Maybe. But just because something follows from existing tech doesn’t mean it’s safer. Personally, I’d say smartphones are more likely to fail than AI.”

“Fail? What, like a stuck button? That’s not even remotely comparable to the dangers AI could pose.”

“Sure, but it’s not impossible to get hurt using Apple or Samsong products, is it?”

“This is ridiculous. I came here to talk about the overheating of AI, not engage in wild hypotheticals.”

And yet—soon after:

<[Breaking] Samsong Announces Full Smartphone Recall>

<Multiple Battery Explosions Reported Due to Overheating>

I sent that commentator a message, filled with genuine concern.

— “You’re not a Samsong user, are you? Just remembered how passionately you defended them on-air—couldn’t help but worry. I guess my imagination runs a little too wild…”

No reply, of course. But that didn’t matter.

Round 2: AI vs. Smartphones.

Winner again: AI.

But before I could even savor the win, a new line of attack emerged.

“The greatest weakness of AI is the lack of an ecosystem. No industry thrives on technology alone. Supply chains, regulations, distribution networks, government policy—you need that whole infrastructure to weather external shocks. And right now, AI has none of that.”

This time, the critique focused on AI’s “ecosystem-less sprint.” I nodded.

“True. But no industry starts with an ecosystem in place.”

Pause. Then:

“And ecosystems don’t guarantee safety, either. Take the financial crisis—the most intricately linked ecosystem in the world became the very thing that amplified systemic risk.”

So I chose the financial industry as my third comparison point. I figured no sane person would ever argue that AI is more dangerous than the global financial system. After all, the scars of 2008 still linger across markets worldwide.

But then:

“Yes, the financial crisis was severe. But since then, we’ve implemented countless regulations and safeguards. It’s now a more stable system. Unlike AI, which still lacks any defined shape—I’d say AI carries greater risk today.”

“……?”

Now this macro investor was being particularly shameless.

“So you’re saying that a past offender is safer—because they’ve paid their dues? Is that it?”

“Just because something’s never failed before doesn’t make it safe. It could just mean we haven’t caught the flaws yet.”

“Ah. Like a restaurant that once caused food poisoning is now safer because it’s been inspected—while the new place next door might give us trouble because it hasn’t been tested yet?”

Communication was breaking down. Still, the outcome came quickly enough:

<[Breaking] U.S. DOJ Slaps $14 Billion Fine on Deutsche Bank>

<Subprime-related damages... CDS Surge, Stock Down 8.4%>

Turns out Deutsche Bank—despite claiming it had cleaned up post-crisis—was still exposed to toxic assets. They just got caught because of a U.S. legal probe. The classic tale of an ex-con hiding another skeleton in the closet.

So, I sent a message—out of courtesy:

— “Thought you might find this useful: A list of NYC restaurants with past food poisoning cases. Stay healthy out there.”

Three stress tests. Three macro funds down. Three wins for AI.

I took a quiet moment to reflect.

“That should be enough to prove our stability.”

Of course, winning these televised sparring matches didn’t mean AI’s structural problems were solved. We still had no clear commercialization models. No fully polished products or services on the market.

But that was never the point. The entire plan was to shift the exam into relative grading. Not to prove AI was flawless—but to show it was no more flawed than everything else investors still believed in.

And the result? Even when compared to governments, renowned tech companies, and global banks, AI was proven to be no more dangerous.

‘This should be proof enough—the bubble is stable.’

The market reaction wasn't bad at all. In fact, with each stress test completed, AFII quietly kept climbing upward.

“Now... it's time for the final phase.”

At last, the moment had come to wrap up this long, long AI campaign. The “endgame” of this grand war had arrived.

And from here on out, my objective was singular:

“Make sure this bubble never bursts.”

Up until now, I had been the one inflating, shielding, and reinforcing this structure. But that couldn’t go on forever. Eventually, I had to make this bubble self-sustaining—a system that wouldn’t collapse even if I stepped away.

And for that...

I needed an investor. The kind who wouldn’t panic-sell when adversity struck. Who’d never pull out, even if performance started to stall.

So—who fits that bill? Simple.

“The government.”

***
