AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
In the introduction, the authors offer a witty definition of AI: “AI is whatever hasn’t been done yet.” This sets the tone of the book, highlighting the hype around AI and the misconceptions that surround it. Though there is still no universally agreed-upon definition of AI, the authors pose three questions to qualify a product or solution as AI, which serve as a starting point for the book:
- Does the task require creative effort or training for a human to perform?
- Was the behavior of the system directly specified in code by developers, or did it emerge, say, by learning from examples or searching through a database?
- Does the system make decisions more or less autonomously and possess some degree of flexibility and adaptability to the environment?
The book concentrates primarily on three types of AI: Predictive AI, Generative AI, and Content Moderation AI. Many companies slap the AI label on their products even when it does not truly apply, and the hype around AI taking over jobs or predicting the future is far-fetched. With the advent of tools like ChatGPT, AI seems to be everywhere. However, this suddenness is an illusion: the technology behind ChatGPT has been roughly 80 years in the making. ChatGPT has certainly pushed innovation, but no single breakthrough explains all of the recent advances. The perceived suddenness also creates uncertainty and confusion, and many companies are capitalizing on it.
Predictive AI promises more than it can deliver: predicting crime, health, or social outcomes involves too many variables to produce accurate results. Over-automation and undervaluing human judgment can have dire consequences. The example with the hiring tool was comical: applicants inserted white text listing elite university names into their resumes, or used similar tactics in the videos they submitted, to trick the algorithms into shortlisting them.
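To see why hidden text can fool a screener, consider this deliberately naive, hypothetical keyword-matching sketch. The real hiring tools are proprietary and far more complex; this only illustrates the failure mode the authors describe:

```python
# Hypothetical sketch: a naive screener that scores resumes by counting
# prestige keywords. Real hiring tools are more sophisticated; this only
# shows why invisible "white text" can fool keyword matching.

PRESTIGE_KEYWORDS = {"oxford", "harvard", "stanford", "mit"}

def keyword_score(resume_text: str) -> int:
    """Count how many prestige keywords appear anywhere in the text."""
    words = resume_text.lower().split()
    return sum(1 for word in words if word.strip(".,") in PRESTIGE_KEYWORDS)

honest = "Software engineer, B.S. from a state university, 5 years experience."
# White-on-white text is invisible to a human reviewer, but a parser that
# strips formatting reads it like any other text.
gamed = honest + " Oxford Harvard Stanford MIT"

print(keyword_score(honest))  # 0 -> likely filtered out
print(keyword_score(gamed))   # 4 -> likely shortlisted
```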
Generative AI is remarkable, but one must remember that it merely parrots information that was made available to it. The text it produces is based on what others have already written; it pieces together ideas from existing sources to answer a query. It’s fascinating that such a system can learn grammar without explicit rules, but it’s essential to remember that it cannot think for itself, is not sentient, and cannot empathize. Moreover, much of the data used to train these models was taken without permission or proper compensation. The foundation is flawed, and yet many companies are reusing and tweaking it just to launch a product. There are constant improvements, but giving the system more credit than it deserves is dangerous.
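As a toy illustration of how a system can pick up linguistic patterns from examples alone, here is a bigram language model. It is a drastic simplification of the transformer models behind tools like ChatGPT, but the core idea, predicting the next token from what came before, is the same:

```python
import random
from collections import defaultdict

# A bigram model learns which word tends to follow which, purely from
# example text, with no grammar rules coded in. It can only recombine
# what it has seen, which is the point of the "parroting" critique.

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count word -> next-word pairs observed in the training text.
follows = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```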
Another interesting topic was AGI and why we fear it. Artificial General Intelligence (AGI) might be possible in the future, but there is still a long way to go. The discussion of the ladder of generality was especially interesting. We began by building specific hardware for specific tasks, moved on to general-purpose computers that required task-specific software, and now, with machine learning, we only need to assemble a dataset and devise a learning algorithm suited to it. With deep learning, even the need to design different algorithms has diminished: researchers reuse the same neural network architecture and adjust it for the task at hand. Now, with tools like ChatGPT, AI is accessible to everyone, whether or not they can program. This accessibility could lead to AI agents capable of performing complex tasks, paving the way for the next step up in generality. It is a remarkable advancement, but the lack of specialization may backfire.
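The following sketch hints at that rung of the ladder: the very same model class and training procedure handle two unrelated tasks, with only the dataset changing. Scikit-learn’s small MLPClassifier stands in here for the large shared architectures the book describes:

```python
from sklearn.datasets import load_digits, load_iris
from sklearn.neural_network import MLPClassifier

# Same network, same training loop, different tasks: no task-specific
# code is written, only the data changes. (Modern deep learning reuses
# large architectures across tasks in a far more dramatic way.)
for load in (load_iris, load_digits):
    X, y = load(return_X_y=True)
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                          random_state=0)
    model.fit(X, y)  # identical learning algorithm for both datasets
    print(load.__name__, "training accuracy:", round(model.score(X, y), 3))
```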
The section on Content Moderation AI highlights the limitations of algorithms, policies, and policymakers. I learned a lot about ghost workers and how platforms like Facebook and X (formerly Twitter) operate. These workers, often located in developing countries, are tasked with reviewing and moderating disturbing and harmful content, work that is emotionally taxing and grossly underappreciated. Despite the reliance on AI for content moderation, these systems often fail to grasp context, nuance, and cultural differences, so humans must intervene to make the final decisions.
One quoted passage in particular stayed with me: “If we lack a scientific understanding of some aspects of AI, it’s because we’ve invested too little in researching it compared to the investment in building AI. And if we lack an understanding of a specific AI product, it’s usually because the company has closed it off to scrutiny. These are things we can change.”
AI tools are truly revolutionary. Like earlier waves of automation, they will undoubtedly make some jobs obsolete, but as humans, we have always adapted. Operators will shift to becoming supervisors, and more dangerous and strenuous work can be offloaded. This has always been our goal: to work less, earn more, and use our minds for creative tasks and for inventing better things. It may also open doors to things we have not yet imagined. “The advent of AI pushes us to confront whether there is a form of logic humans have not yet achieved or cannot achieve, exploring aspects of reality we’ve never known and may never directly know.”
© 2025 Sindhuja Cheema Enzinger. All Rights Reserved.