Book Review: “The Coming Wave” by Mustafa Suleyman

Image generated using OpenAI's DALL-E 3

Hello everyone!

I just finished reading Mustafa Suleyman and Michael Bhaskar's book “The Coming Wave: AI, Power, and the 21st Century's Greatest Dilemma” and decided to share my impressions with you. This is the first review I am publishing here, but I will try to turn it into a tradition. If you want to hear my thoughts on any book, movie, or other creative work exploring Web3 and AI topics, don't hesitate to reach out and let me know.

Thank you for being here! Let's dive in!


As soon as I heard that “The Coming Wave” had been published, I hurried to buy it and start reading, mainly because Mustafa Suleyman is a distinguished AI entrepreneur with numerous impressive achievements in the field. He co-founded DeepMind and led it for many years, including through its acquisition by Google. One of the company's biggest successes and a breakthrough in the world of biology, the AlphaFold protein-folding solution, was achieved during his tenure. After moving to Google and then leaving it in 2022, Suleyman co-founded Inflection AI, the lab that built the chatbot Pi and recently released the Inflection-2 model, which reportedly outperforms Meta's Llama 2 and Google's PaLM 2.

However, let me tell you right away that reading this book was a disappointing experience. The author dedicates 300+ pages to eagerly preaching how scary AI can be and why we must do everything in our power to “contain” its proliferation. I had to read through all of them to finally understand how Suleyman envisions that “containment” being accomplished, namely through strong international regulation and an absolute prohibition of open-source AI development:

“On the one hand, total openness to all experimentation and development is a straightforward recipe for catastrophe. If everyone in the world can play with nuclear bombs, at some stage you have a nuclear war. Open-source has been a boon to technological development and a major spur to progress more widely. But it's not an appropriate philosophy for powerful AI models or synthetic organisms; here it should be banned. They should not be shared, let alone deployed or developed, without rigorous due process.”

Let me translate this: limit AI research and development to a few tech conglomerates, or we'll all be doomed. How convenient, don't you think? Especially for a person who spent half of his professional life at Google. It would be a surprise if he didn't back the closed-source camp.

Suleyman constructed several scary scenarios and gave various examples to try to prove his point, but he failed to convince me. The fact that he contradicts himself on numerous occasions did not help either.


Suleyman's main thesis is that every useful technology ever invented has gradually become cheaper and more accessible and has quickly proliferated throughout the civilized world, even when authorities tried to stop it. He goes back through history to illustrate how rulers attempted to halt, or at least delay, the spread of a technology, but no one has ever succeeded in containing the wave.

Yet he cites nuclear weapons as the one technology the world successfully managed to contain once the existential threat they represented was recognized. The author himself then goes on to enumerate the countries that have pursued or acquired nuclear capabilities despite the international non-proliferation treaties: China, India, Pakistan, and North Korea have accumulated nuclear arsenals, Iran and Syria have run weapons programs of their own, and we are right now in the middle of a war where the risk of a nuclear conflict is a major concern.

Given all this, it is quite cynical to pretend that nuclear weapons technology has been safely “contained”. Moreover, claiming that AI can and must be contained in the same way is borderline stupid: AI is neither as expensive as nuclear weapons nor as dependent on rare materials and specialized expertise. If we couldn't stop weapons, how could we possibly stop software?

In addition, Suleyman fails to mention one important detail about nuclear technology: the containment strategy applied to its military use, and the fears it perpetuated, spilled over into its civil application for power generation. As a result, the world spent decades burning fossil fuels and degrading the environment instead of building cleaner nuclear plants, even though nuclear power has repeatedly been shown to be among the safest ways to generate energy.


By and large, all Suleyman does in this book is list various military conflicts and terrorist attacks the world has suffered, and then add: “Imagine how much worse it would have been if these bad actors had access to powerful AI.” I am not here to argue that it wouldn't; I just don't see how we can stop them from using AI. One of the events the author mentions is the WannaCry ransomware attack, in which North Korean hackers deployed a Windows exploit originally developed by the US National Security Agency (NSA) and leaked online. If such software can leak from one of the most heavily secured institutions in the world, how can Google or Microsoft guarantee that their AI research will be kept safe? Suleyman does not answer that question.

There was one other contradiction that surprised me. On the one hand, Suleyman declares he doesn't believe AI will eventually gain consciousness. Rather, he is more pragmatic, yet a bit naive, in his conviction that humans will grow more and more comfortable giving up their control, thus inadvertently granting AI its autonomy. On the other hand, as with so many other doomers, sci-fi-induced concepts sneak into his chain of thought, and he nonchalantly declares something along the lines of “What do you do when the tool comes to life?” I guess the doomer persuasion is not effective enough without attaching anthropomorphic traits to AI.


Probably the most irritating part of this book is Suleyman's condescension toward anyone who disagrees with him. From the very beginning, any opposing opinion is denigrated and made to sound irrational, stupid, and closed-minded. The author has even coined a term for people who dismiss AI-driven existential risks:

“This widespread emotional reaction I was observing is something I have come to call the pessimism-aversion trap: the misguided analysis that arises when you are overwhelmed by a fear of confronting potentially dark realities, and the resulting tendency to look the other way.”
“Pessimism aversion is an emotional response, an ingrained gut refusal to accept the possibility of seriously destabilizing outcomes. It tends to come from those in secure and powerful positions with entrenched worldviews, people who can superficially cope with change but struggle to accept any real challenge to their world order.”
“Those who dismiss catastrophe are, I believe, discounting the objective facts before us.”

Call a reaction “emotional” and it is automatically invalidated; women would know. What Suleyman says here is that if you are not afraid, you are not thinking straight. If you are not afraid, you refuse to look reality in the eye and there is something wrong with you. If you feel optimistic about the future and the progress of AI technology, you don't have all the facts.

I don't know about you, but I believe such one-sided rants are of no use to anyone. They don't contribute to a real discussion or to reaching workable agreements. If history has taught us anything at all, it is that concentrating excessive amounts of power and resources in the hands of a chosen few usually leads to dangerous outcomes. Democratizing access to knowledge, nurturing the freedom to experiment, and fostering a competition of ideas, on the other hand, form a universal recipe for sustainable progress. Why would it be any different with AI?