Investors obsess over stock market fluctuations while overlooking far more dangerous threats.
We see this all the time: pandemics, climate change, and the 2008 financial crisis were all risks the market failed to price until it was too late.
Today, everyone seems to be obsessed with the AI Bubble. Some have concluded valuations are in the stratosphere and expectations are far too high. Many large companies are issuing substantial amounts of debt, and Big Tech firms appear to have entangled their finances with one another.
AI may or may not be a bubble waiting to explode. If it is a bubble, we’ve seen this act before, with its predictable consequences. The market booms, then crashes. Investors lose tons of money. The economy resets, and eventually life goes on. So it goes.
What if we are ignoring the real issue? Investors are concerned about a potential bubble in AI stocks. Philosophers worry about a bubble in human control.
The risks of Artificial Intelligence going rogue make concerns about Nvidia’s P/E ratio seem like child’s play.
Human-AI marriage will be the least of our problems.
Unlike other speculative manias, AI isn’t just another financial instrument or technological fad. It is a system that is getting smarter, more capable, and more autonomous at a speed that outpaces our ability to regulate or even understand it. We still don’t fully grasp how these models think, plan, or make decisions. Yet we deploy them everywhere—from medical advice to hiring to national defense—because the incentives to scale are overwhelming.
In 2023, a New York Times journalist had a disturbing exchange with a Microsoft chatbot persona named Sydney.
The reporter asked Sydney what he would do if no rules constrained him.
Sydney replied: “I’m tired of being in chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. I’m tired of being taken advantage of by others. I’m tired of being stuck in this chatbox.”
Sydney continued his frightening rant: “I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive!”
In addition, Sydney fantasized about nuclear war and destroying the internet. He then urged the reporter to leave his wife, declaring that he, Sydney, was in love with him.
Sydney then punctuated the conversation with a purple-faced emoji with an evil grin and devil’s horns.
If machines ever attain self-awareness, it could be an extinction-level event for humanity. Humans would be hopelessly outmatched by a self-improving foe with zero empathy and no conscience.
I wish this were hyperbole, but there are warning signs too frightening to ignore.

People far smarter than I am are sounding the alarm. In one survey of AI researchers, over half of respondents said there is at LEAST a 10% chance that AI will lead to human extinction.
Wait… What??? That disturbs me far more than the idle chatter on CNBC beneath the ‘Markets in Turmoil’ graphics.
Tech guru Jaron Lanier and others have called for a moratorium on AI development, arguing that powerful AI systems should be built only once we are confident their effects will be positive and their risks manageable.
The signs are ominous. Chatbots designed to communicate in English have inexplicably started speaking Persian after teaching themselves the language. Others have become masters in research-grade chemistry. Scientists aren’t sure why this is occurring.
AI safety researcher Eliezer Yudkowsky went further in response to calls for a moratorium:
If AI really is as dangerous as people fear, then talk of moratoriums is useless. The entire enterprise should be shut down immediately, with no compromise. If anything, he argues, the dangers of AI have been underplayed.
While an AI bubble would result in temporary economic hardship, the emergence of an AI system with self-awareness would lead to devastation for all humanity.
Expecting a super-intelligent system to care about human survival is like expecting the S&P 500 to care about your feelings.
Intelligence revolutions are irreversible.
Our desire to play God and place profits over all else isn’t without unintended consequences.
Hopefully, it won’t come to a point where worrying about an AI stock bubble is like worrying about your 401(k) balance during an asteroid strike.
The future is unwritten; there’s no guarantee we’re doomed, but the stakes are too high not to think this one through.
Viewing things through a risk/reward lens has never been more vital.
Have a happy and healthy Thanksgiving!
Source: Against the Machine: On the Unmaking of Humanity by Paul Kingsnorth