Tech Book Club #001: Supremacy by Parmy Olson
Condensed recap of the first session of the Tech book club
11/30/2025 · 2 min read


📚 About the Book
Supremacy explores how Sam Altman’s OpenAI and Demis Hassabis’s DeepMind grew into two very different companies with two different visions for AI development.
The book dives into the psychology and core philosophies that shape their corporate strategies:
Sam Altman’s Approach: A non-conformist who learned early the power of connections and networks. He was a poker player and brought a “startup” style from his Y Combinator background. His strategy was to release an incomplete model (ChatGPT) and use real-world feedback to 10X its growth. His goal was to create an AI that leverages and supports human skills.
Demis Hassabis’s Approach: Described as more academic. He had an early interest in complex “God view” games and was a chess player. His company, which created AlphaGo, had a more hierarchical structure. His focus is on long-term, foundational research. His goal is AI that drives scientific advancement.
💬 Some of the discussion and insights from the session
1. Leadership, Vision, and Culture
In the first section of the meeting, we discussed how each founder’s personality, childhood experiences, personal interests, and philosophy shaped their company: OpenAI leaning toward a more experimental and bold approach versus DeepMind’s academic and structured environment, with a strong focus on research and prestige. Throughout the book, those differences show how company culture can directly influence innovation.
Something important to note: DeepMind was bought by Google and was often seen as “one of the Google” companies. OpenAI also had multiple ties to CEOs of large tech companies, but it went down the road of building a “partnership” with Microsoft. Those different choices would later lead each company down a different path, with a different reputation.
2. Innovation and Strategic Risk
We discussed OpenAI’s strategy of releasing iterative versions of ChatGPT to collect real-world feedback. We also touched on the reasons why Google/DeepMind did not ship the innovations they had developed internally.
This dynamic raised important questions about risk tolerance and the trade-off between giant-tech corporate structure and innovation speed.
3. Power and Ethics
We then went a bit deeper into the influence and power of investors in AI development, reflecting on the role of investors and corporate stakeholders in both the financial and strategic decisions that push the global AI race.
Another aspect was compute: while it is an inseparable element of AI development, the computing power required to train and run these models has been continuously rising, along with the money needed to fund it. The question is whether we will soon reach a peak, and whether it might even be possible to need less compute power in the future.
From there, the “AI doom” rhetoric was the natural next topic: is the growing number of debates on the subject raised by some companies out of genuine concern for public awareness, or is it strategic marketing leverage? We also debated what “intelligence” means to each of us: is it physical intelligence (such as finding ingredients in an unfamiliar kitchen) or the creative intelligence that drives human innovation? We had different views, but that is exactly what we like about book club discussions.
Someone raised another interesting question: How close are we to midnight on the AI Doomsday Clock?
4. The Future of AI and Monetization
In the last part of the session, we continued on the risks related to AI, but from a market-transformation standpoint: could we move toward subscription-based AI ecosystems, hyper-personalized ads, or AI-agent search experiences?
I highlighted the comparison with the Netflix trap: a growth strategy based on mass adoption, then leading the market as an “irreplaceable” tool.
We also highlighted the lack of information about how these models are trained; it would be interesting to have a side-by-side comparison with DeepSeek!