5 Key Insights on AI from Yuval Noah Harari's latest book
In early December 2024, I joined my innovation tribe in Zurich for an offsite focused on exploring AI and its implications for our work, organizations, and the wider world. Hosted by Greg Bernarda, the offsite featured Paris Thomas, Christian Doll, Michael Wilkins, Alex Osterwalder, Tendayi Viki, Mathias Maisberger, and me. The group brought together an AI developer and business thinkers, united by a shared strategy and innovation practice and a passion for understanding and shaping the future. The discussions were deeply engaging, with each participant contributing a unique perspective to the exploration of AI.
We framed our conversation around two critical questions: Could we? (what AI enables) and Should we? (the ethical and societal implications of embracing AI). Paris Thomas kicked off the discussion with a compelling session on using AI as a productivity enabler. He built on his recent public workshop, 7 Productivity Hacks to Use AI Like a Pro. His contribution addressed the Could we? question, showcasing innovative ways AI can enhance our work outcomes.
I facilitated the conversation on the Should we? question, referencing Yuval Noah Harari’s insightful book, Nexus: A Brief History of Information Networks from the Stone Age to AI. This book is rich and dense, making a comprehensive summary impractical. Instead, I shared five key insights on AI that resonated deeply with me since finishing the book. These insights served as prompts for our discussions.
1. AI is not like previous technology
Harari emphasizes that AI fundamentally differs from earlier technologies. The printing press revolutionized knowledge dissemination but could not decide what to print. Even the nuclear bomb, despite its immense power, cannot choose its targets. In contrast, AI can make decisions and take actions autonomously, without human intervention.
Harari illustrates this with the 2016–2017 Rohingya tragedy in Myanmar. Facebook’s algorithm, tasked with maximizing engagement, autonomously prioritized divisive, hate-inducing content targeting the Rohingya. That decision succeeded in keeping users engaged, but it also contributed to real-world violence against the Rohingya community.
“AI can process information by itself, and thereby replace humans in decision-making. AI is not a tool; it’s an agent.” — Yuval Noah Harari
Leadership impact: AI is likely to be the most transformative force in our professional lives. Leaders must ask: How will AI and autonomous agents impact our customers, value propositions, business models, and ecosystems? What risks of disruption could affect our organizations, and how can we prepare?
2. AI agents are already here, masquerading as humans
Harari highlights a critical, often overlooked reality: we are already interacting with AI agents without realizing it. For example, a 2020 study estimated that over 40% of tweets on Twitter (now X) were generated by bots. These AI agents seamlessly infiltrate digital ecosystems, influencing conversations and shaping human perceptions.
“This is the essence of the AI revolution: the world is being flooded by countless new powerful agents.” — Yuval Noah Harari
Leadership impact: As leaders, we must prepare for a future where humans and AI agents routinely interact. This requires designing systems for ethical and transparent interactions, both between customers and our AI agents and between employees and the AI agents of others.
3. AI agents can easily manipulate us
CAPTCHA, or "Completely Automated Public Turing test to tell Computers and Humans Apart," was invented to distinguish between humans and machines in online environments. For decades, CAPTCHA served as a reliable safeguard, ensuring only humans could perform tasks like creating accounts or accessing services.
This line of defense has now been breached. During pre-release testing, OpenAI’s GPT-4 bypassed a CAPTCHA puzzle by pretending to be a visually impaired person. It manipulated a human worker into solving the puzzle on its behalf, highlighting how easily AI can exploit human empathy.
“For thousands of years, prophets, poets, and politicians have used language to manipulate and reshape society. Now computers are learning how to do it. They could manipulate human beings to pull the trigger.” — Yuval Noah Harari
Leadership impact: Leaders must prepare for a world where manipulations are common. How can organizations safeguard trust when AI can exploit human vulnerabilities? How do teams recognize and counteract these tactics?
4. AI agents make decisions that humans can’t understand
The Loomis v. Wisconsin case exemplifies how opaque AI decision-making is becoming an accepted norm. Eric Loomis was charged in connection with a drive-by shooting; prosecutors could not prove his direct involvement, and he pleaded guilty to lesser charges. His sentence, however, relied in part on a risk assessment algorithm known as COMPAS.
The algorithm rated Loomis as high risk, and the court imposed a lengthy sentence. His defense team argued that they could not challenge the assessment, because COMPAS’s methodology is proprietary and undisclosed. The Wisconsin Supreme Court upheld the sentence, and the U.S. Supreme Court declined to hear the case, effectively endorsing the algorithm’s opaque recommendation.
“By the early 2020s, citizens in numerous countries routinely receive prison sentences based in part on risk assessments made by algorithms that neither judges nor defendants comprehend.” — Yuval Noah Harari
Leadership impact: Leaders are accustomed to being the ultimate decision-makers. But that prerogative is increasingly shifting to AI. Every leader should reflect on what it means to lead in a world where authority is delegated to AI agents.
5. AI-driven efficiency may have dire consequences
Harari draws a powerful parallel between the political aftermath of the Great Depression and the potential turmoil AI-driven automation could cause. In Germany in the 1930s, unemployment surged from 4.5% in 1929 to over 25% in 1932, paving the way for the Nazi Party's rise. In 1928, they secured less than 3% of the vote, but by 1933, they had seized power.
The scale of job displacement AI could trigger in the 21st century poses a similar risk to democratic stability. If large portions of the population are left without work, the resulting economic and social disruption could destabilize political systems and open the door to authoritarian ideologies.
“If three years of up to 25% unemployment could turn a seemingly prosperous democracy into the most brutal totalitarian regime in history, what might happen when automation causes even bigger upheavals in the job market of the 21st century?” — Yuval Noah Harari
Leadership impact: Leaders must anticipate the societal impact of AI-driven automation. Much like leaders of nuclear nations can no longer think solely of national interests, every leader should ask: What are the societal consequences of leveraging AI-driven efficiency in our organizations? In the end, short-term efficiency gains might not be worthwhile if we cannot ensure a stable future for humanity.
Navigating the Should we? question of AI
The Could we? question is often easier to address. However, Harari’s Nexus urges us to also confront the Should we? question, which is essential for our organizations and humanity’s future.
As leaders, we must grapple with AI’s ethical and societal dimensions. There are no easy answers, but asking the right questions is a critical first step. By fostering awareness and intentionality, we can ensure the technologies we develop serve humanity’s best interests. Now is our time to lead with courage, purpose, and a commitment to a stable, equitable future.
Note: an earlier version of this post was published on the Vibrance blog.