The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma
Last month, I reviewed a book called AI Snake Oil, published in September 2024, which examines the hype surrounding AI and argues that the distrust and fear voiced by several prominent figures and scientists are unwarranted. This book, The Coming Wave, was written a year earlier, in September 2023. After exploring both sides of the argument (one dismissing the idea that AI will become all-consuming, more intelligent, or more capable than humans; the other outlining the many ways things could spiral out of control), I find both perspectives valid. It was also fun to read the two books in reverse chronological order.
While AI Snake Oil focuses on GenAI, Predictive AI, and Content Moderation, The Coming Wave delves deeper into Artificial General Intelligence (AGI), Artificial Capable Intelligence (ACI), and Biotechnology.
The book is divided into three parts. The first discusses the technical capabilities of these technologies, the second explores their ethical consequences, and the third proposes steps we should take to remain cautious.
Some scenarios presented seem exaggerated, almost unrealistic, and at times, it feels as though human capability is underestimated. However, the author reminds us repeatedly that humanity has historically underestimated the impact of major technological waves—be it agriculture, the industrial revolution, or the internet.
Still, I can’t help but smile at statements like: “What happens when a human mind has instantaneous access to computation and information on the scale of the internet and the cloud?”
There is a physical limit to how much we can process and understand. How much do we truly retain from the endless reels and short videos we consume? It’s incredible that information is available at our fingertips, but there will always be a gap between how fast we can access it and how deeply we comprehend it.
Even in industry, automation is not always the answer. There are many tasks that humans still perform faster and more efficiently than machines. There will always be limitations, and not every job can simply be replaced.
What we should really be concerned about is our increasing dependence. Initially, new technology is met with skepticism, but as it improves, we trust it more, integrate it into our daily lives, and become increasingly reliant on it. ChatGPT and similar models are continuously evolving.
One aspect the author doesn’t discuss enough is the environmental impact of widespread AI usage. I recently read that a Google search consumes about 0.3 Wh of energy, while a ChatGPT query requires roughly 3 Wh, about ten times as much. Are people even aware of this? And if they are, is it a price they are willing to pay for convenience?
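To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The per-query energy numbers are the ones quoted above; the daily query volume is purely hypothetical and only there to show how quickly the difference adds up.

```python
# Rough comparison based on the per-query figures quoted above.
# The query volume below is hypothetical, purely for illustration.

GOOGLE_SEARCH_WH = 0.3   # energy per Google search, in watt-hours (as quoted)
CHATGPT_QUERY_WH = 3.0   # energy per ChatGPT query, in watt-hours (as quoted)

ratio = CHATGPT_QUERY_WH / GOOGLE_SEARCH_WH
print(f"One ChatGPT query uses ~{ratio:.0f}x the energy of a Google search.")

# Hypothetical scenario: 20 ChatGPT queries a day for a year.
queries_per_day = 20
yearly_kwh = CHATGPT_QUERY_WH * queries_per_day * 365 / 1000  # Wh -> kWh
print(f"~{yearly_kwh:.0f} kWh per year at {queries_per_day} queries/day.")
```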
The author also highlights the issue of open-source accessibility. There is no universal regulatory consensus. Industries like aviation and automotive manufacturing operate under strict safety standards, yet comparable safeguards are largely absent for AI and the broader internet.
Regulation is necessary, but it won’t be easy. Every nation is racing to be at the forefront of this technology. What will be the price we ultimately pay?
The author suggests ten steps to mitigate the risks, one of which is restricting access to critical resources. But what has that led to? DeepSeek? Those with fewer resources often find creative workarounds and innovative solutions.
With the release of models like DeepSeek and powerful AI technology becoming widely available—and at a low cost—serious risks could arise.
Very thought-provoking.
© 2025 Sindhuja Cheema Enzinger. All Rights Reserved.