
Between Hype and History: Conversations with the AI Elite

REVIEW ESSAY
The Scaling Era: An Oral History of AI, 2019–2025
by Dwarkesh Patel with Gavin Leech
Stripe Press, 2025, 248 pages

The oral history as a form has traditionally concerned itself with those outside the halls of power. Studs Terkel interviewed steelworkers, Alessandro Portelli documented the lives of coal miners, and the Works Progress Administration collated the narratives of the formerly enslaved. The form’s premise rests on the notion that official archives miss something essential about how reality actually unfolds, and that the view from the ground differs meaningfully from the view in the boardroom.

Writer and podcaster Dwarkesh Patel’s The Scaling Era: An Oral History of AI, 2019–2025 takes the opposite approach. This oral history surveys the people at the very center of power: Mark Zuckerberg of Meta, Satya Nadella of Microsoft, Demis Hassabis of Google DeepMind, and another dozen researchers, engineers, and executives involved in the building of frontier artificial intelligence. This is an oral history of the elite, conducted by someone open-minded, if not sympathetic, to their project.

At first glance, this might seem unnecessary, given the hundreds of annual CEO profiles and founder hagiographies. Yet Patel’s interviews are not much focused on the lives or accomplishments of his subjects. He is far more concerned with the ideas they hold, the trendlines they observe, and what it all means for the future of humanity. The purpose of his history is to pull back the curtain on what a cohort of Silicon Valley giants think is happening and what AI has in store for us. It is a tall order, for very few have succeeded in faithfully communicating the intellectual environment that surrounds the AI industry. The written record as captured by outsiders is poor. Really understanding what is currently transpiring inside the companies producing frontier AI requires a presence in the room with the engineers building it.

Echoes of the Lunar Society

The Scaling Era is the product of more than one hundred recorded conversations, which took place on Patel’s podcast—and innumerable casual, unrecorded ones—from mid-2020 onward. The podcast initially had little to do with AI. Called The Lunar Society, its name harked back to the eighteenth-century Birmingham club where Erasmus Darwin, James Watt, Josiah Wedgwood, and other industrialists and natural philosophers gathered monthly to discuss science, invention, and the world they were remaking. Patel, then still a student at the University of Texas at Austin, started with members of the George Mason University economics department and jumped from episodes on Progress Studies to why children should be taken more seriously, often recording from his bedroom.1 Many of his early interlocutors hailed from academia or were authors eager to share their ideas with the world.

While Patel never fully abandoned his appetite for topical breadth in conversation, over time his interviews increasingly focused on one subject above all else. Artificial intelligence, still in the early days of the scaling era at the launch of The Lunar Society, was starting to overtake San Francisco. A few short months after ChatGPT’s launch in November 2022, Patel released interviews with some of the field’s most famous thinkers, technologists, and venture capitalists, including Eliezer Yudkowsky, Ilya Sutskever, Marc Andreessen, and Nat Friedman. In the subsequent two years, Patel staked out his position as one of the most important technology interviewers of our time, and his show became one of the most popular technology podcasts in the world. By the time he had hit twenty-five, many, myself included, had come to look to his conversations for how to think through this moment in AI.

These conversations would often be recorded in Patel’s studio in San Francisco, the epicenter of AI. The town, like the AI universe he documents, is small in a way that would surprise most outside observers. Many of the researchers and executives at Anthropic, OpenAI, and the handful of other organizations involved in advanced AI know each other. In many instances, they graduated from the same programs, worked at the same companies, or at some point found themselves at the same house parties. All seven of Anthropic’s founders once worked at OpenAI. Moreover, many of them live within a few miles of each other on a single peninsula. Information travels through conversations, Signal chats, Google Docs, and informal blogs—and, over the past few years, Patel’s podcast.

This scene has a distinctive epistemology. There are strong norms around reasoning from first principles, rather than deferring to consensus, and an openness to the possibility that the future may depart radically from the past. If evidence suggests that AI will improve generation upon generation given more data and computational power, that is taken seriously. San Franciscans follow the logic even if the implications sound like science fiction. Weirdness is not by itself disqualifying. The result is a curious blend of technical rigor and speculation. The same person might publish a well-researched paper on model interpretability and the next day go on a podcast to theorize about superintelligence.

Within these broad cultural similarities, San Francisco nonetheless contains distinct clusters of values. The major labs—OpenAI, Google DeepMind, Anthropic, Meta, xAI—have sharply different internal cultures and intellectual sensibilities, and there is pride associated with building at each of them. Surrounding these institutions is a dense start-up and venture ecosystem oriented around the frontier labs. New companies emerge to build tools on top of foundation models or to commercialize model access. Some are respected as serious technical efforts, while others risk being overtaken by the next order of magnitude in scaling. A small number of investors are deeply engaged with the technical substance of AI, while many others have been much slower to internalize what genuinely transformative progress would mean for familiar business models. Though its precise dynamics may strike the outside observer as alien, this ecosystem, like any industry, is shaped by institutions, incentives, and personalities.

Patel is unusually well-equipped to be a conduit into this world as it builds and thinks through the implications of AI in real time. He is not and never has been employed by an AI lab, freeing him from financial conflicts of interest or an institutionally mandated worldview. He can take the prospect of transformative AI seriously while still interrogating whether it is real. At the same time, armed with his computer science degree, he has the technical background to understand, through and through, what his guests are saying about the technology. He can interrogate how models behave, why the scaling curves look the way they do, what the interpretability research actually shows, and where the theoretical gaps remain. His approach has little in common with journalism as it is commonly practiced. He rarely touches on hot-button issues, even when in the room with founders in the media limelight. This has in turn built the kind of trust that allows his subjects to think out loud about the underlying developments that matter in the long run.

What has emerged from his long-form interviews—some stretching to more than seven hours—is a mosaic of perspectives on the development of AI. Most, though not all, of Patel’s guests take the possibility of extremely powerful AI very seriously. Many think it might arrive within the next decade. They are willing to entertain speculation about what human-level AI across all domains of life might look like, and they theorize about what vulnerabilities AI might exploit to take over the world. Aside from this general openness to the idea that radically transformative AI could arrive soon, the guests’ perspectives diverge significantly. There are sharp disagreements among Patel’s interlocutors about fundamental questions: How will capabilities advance? Is scaling all you need? What are the biggest risks? The conversations are marked by the authentic uncertainty of people grappling with unknowns.

Like the original Lunar Society, these are men discussing practical problems with vast implications. The Birmingham industrialists were building the engines and factories that would transform Britain and then the world. Today, the participants in Patel’s conversations believe they are doing something comparable—if not even more important—and they may be right. Now, having transcribed, reshuffled, and framed these many interviews, Patel has produced a primary source document of a founding moment, even as the founding is still underway.

The Significance of Scale

The “scaling era” of Patel’s title refers to a period defined by an empirical discovery: AI models improve predictably as you increase their size, training data, and computational power. The now famous “scaling laws” describe optimal combinations of data, model parameters, and compute, but the underlying insight is much simpler: the path to more capable AI might be less about fundamental architectural breakthroughs than about feeding ever larger resources into existing approaches. And indeed, while the future of scaling is hotly contested, scaling has turned AI models from useless curiosities into products that today generate tens of billions of dollars. Patel organizes his exploration of these phenomena, and the many questions they invite, thematically rather than chronologically.
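A concrete form may help here. The now-standard parameterization from the research literature—the “Chinchilla” scaling law of Hoffmann et al. (2022), offered as background from the field rather than a formula quoted in the book—predicts a network’s training loss L as a function of its parameter count N and training tokens D:

L(N, D) = E + A/N^α + B/D^β

Here E is the irreducible loss of the data itself, and A, B, α, and β are empirically fitted constants (roughly α ≈ 0.34 and β ≈ 0.28 in the original paper). Minimizing L under a fixed compute budget, approximated as C ≈ 6ND, yields the practical rule of thumb that parameters and data should grow in roughly equal proportion, on the order of twenty training tokens per parameter. The remarkable thing is less the formula’s exact shape than that so simple a curve has kept fitting as budgets grew across orders of magnitude.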

After a dense primer on basic machine learning concepts, Patel moves through eight chapters covering the core obsessions of the field: scaling, inputs, evaluations, model internals, safety, potential impacts, the pace of progress, and timelines. Each chapter opens with Patel’s own synthesis before fragmenting into excerpts from his interviews. Rather than printing twelve-thousand-word transcripts, he has curated exchanges around specific questions, moving fluidly between speakers. Each chapter features around a dozen different interview fragments.

The effect is something between a textbook and impressionism. You might read Anthropic CEO Dario Amodei’s thoughts on why scaling laws hold (his answer: “we still don’t know”),2 then move immediately to his cofounder Jared Kaplan’s complementary perspective,3 then to Ilya Sutskever on whether scaling will transfer from next-token prediction to genuine reasoning,4 then to Carl Shulman on what biological evolution might tell us about the limits of intelligence.5 Interviewees are not introduced before their exchanges begin; you can flip to the appendix for biographical details if you want them.6 Instead, the book focuses on acquainting the reader with concepts. Patel wants you to know what a parameter is and what the differences are between pre-training and post-training. These technical terms are covered in the primer, which functions as a kind of dramatis personae for the rest of the book.7 The ideas, not the personalities, are foregrounded.

What comes out in The Scaling Era, just as it does in the podcasts that contributed to it, is uncertainty. The disagreements among Patel’s subjects reflect genuine difficulty in predicting where this technology is headed: some believe artificial general intelligence (AGI) will arrive within five years, others think it is decades away; some believe current safety approaches are adequate, others that we are building systems we cannot reliably control. If this were merely hype, we might expect more uniformity. Patel has captured something messier. His is a snapshot of a cohort of technical elites who do not fully understand what they are building but have reason to believe it might be the most important innovation ever built.

It should be noted that the book does not capture the full range of opinion, even within Silicon Valley. We never hear from, for example, Yann LeCun, Meta’s former chief AI scientist, who has argued that large language models are a dead end, or Gary Marcus, who has built a career on the case that scaling will hit fundamental limits. François Chollet, the creator of the Keras deep learning library and ARC-AGI, serves as the primary dissenting voice on whether models in their current form demonstrate true intelligence.

Since publication, Patel has interviewed figures like Andrej Karpathy and Richard Sutton, who are more cautious about what scaling alone can achieve. But while Karpathy and Sutton are sometimes cast as skeptics, in reality they dispute whether AI will trigger the singularity in a matter of years or merely be a trillion-dollar industry.

But this is all to say that The Scaling Era is history with a perspective. It is best understood as a product of a particular cohort: those who believe that the leading AI labs may soon build powerful, even superintelligent, AI. This is a community talking to itself. Therein lies tremendous value, but readers should understand they are being invited into a worldview—held by those Patel perhaps deems most worthy of consideration—rather than offered a complete survey of the field.

Stripe Press has produced a beautiful object to house all this information. Narrow columns leave generous margins for definitions and annotations, footnotes populate the bottom halves of pages, and graphics come to the rescue where words fall short. The structure serves a pedagogical purpose. Where the podcasts place the listener as a passive third party to conversations that may go over one’s head, the book supplies resources to understand what is being discussed. Terms are defined as they appear, and context is provided where the interviews assume shared knowledge. For a reader coming in cold, the book offers handholds that the podcasts do not.

This is not to say that The Scaling Era is an easy read. The content is denser even than many of the original interviews. Patel has selected the most substantive and challenging passages and strung them together with minimal filler. The effect is demanding of both intellect and attention in a way that most writing on technology is not. And while Patel has somewhat cleaned up his interlocutors’ transcripts, their style and composition reflect the spoken word and are far from polished prose. Many sentences run on, interrupt themselves, and restart. Here The Scaling Era is simultaneously at its weakest and its strongest: the unpolished exchanges demand effort, but they convey how these people actually think and speak.

The Scaling Era’s last full chapter is its shortest: a compilation of answers to the question of when interviewees expect AGI. Responses range from 2025 to 2040, with some declining to answer at all.8 One might reasonably ask what such a range tells us. Fifteen years is a long time, and the refusal of some to respond indicates that even insiders see the question as unanswerable. Critics of this approach have long argued that quantification conveys a falsely confident precision. San Franciscans might retort that they are merely giving the mean of a probability distribution. Regardless, the apples-to-apples comparison across all subjects is in this case clarifying and satisfying. Across twenty interviews with individuals of varying expertise and temperaments, the last chapter is the one moment of direct comparison. The takeaway is imprecise but significant: this is a group that sees timelines as short enough to speak of them in years, rather than generations. It is a fitting end for a book unafraid to ask the questions its subjects are actually obsessing over, even when the answers risk seeming absurd to outside observers.

The Epistemics of AGI

Patel’s book hit the shelves in October 2025, at a time when the world seemed to be sharply diverging in its assessment of AI hype. After the disappointing announcement of OpenAI’s GPT-5 in August, many Silicon Valley watchers found new grounds on which to claim that scaling was hitting a wall, suggesting that the AI labs would never be able to build superhuman technology.9 Perhaps AI would conform to the frame set by Arvind Narayanan and Sayash Kapoor and become just a “normal technology,” or it might disappoint entirely.10 As usual, as soon as AI was “over,” the labs were back. Gemini 3 Pro, Claude Opus 4.5, and GPT-5.2 were arguably impressive models. According to some of the most stable benchmarks, AI capabilities continue to advance along the expected trendlines and even show some signs of acceleration; the scaling era is certainly not over yet.11

The persistence of these trendlines is part of what distinguishes AI from past hype cycles. Empirically, the scaling laws have held for over a decade across many orders of magnitude, with no clear ceiling in sight. But there are also theoretical reasons to take the bull case seriously. Where most technologies make specific tasks more efficient, AI is a tool for building better tools. If it continues to advance, it could eventually accelerate the development of AI itself, creating a feedback loop unlike anything in technological history. Indeed, the stated goal of leading AI labs is to “close the loop” and automate AI development. We may already be witnessing a technological transformation on the scale of the industrial revolution. The theoretical possibility of recursive self-improvement suggests it could go further still.

In San Francisco, you have researchers at Anthropic, OpenAI, Google DeepMind, and a handful of other organizations unsure whether they are building a product that will cure cancer and restore American industrial supremacy or end the world. While far from all of them believe that artificial superintelligence will be created in under a decade, many leading researchers and lab leaders do. Labs are tasking hires with problems that would have seemed absurd a decade ago: how do you maintain control over a system as it becomes more capable than its operators? How can you tell if a machine is lying to you? These are not fringe concerns. They are being actively addressed across frontier AI labs.

In Washington, onlookers struggle to make sense of these developments. There is media coverage of the AI boom, handwaving at “AI-enabled” this or that, and roundtables galore. But the substance of these conversations is more likely to be about disemployed truck drivers than the “country of geniuses in a datacenter” discussed out West. Pattern matching abounds: AI is a cesspool of misinformation like social media, a scam like crypto, a hype cycle like the other “frontier technologies.” Few consider whether Patel’s subjects—winners of Nobel Prizes and Turing Awards—might actually be right.

This is not a problem unique to AI, but it may be unusually consequential here. In most technical domains, the gap between those who build technologies and those who govern them is manageable. If policymakers misunderstand the finer points of spectrum allocation or pharmaceutical manufacturing, society often finds ways to muddle through. For most areas, we have created institutions and feedback mechanisms that operate in the background to provide expertise and ensure efficient governance. But in the case of AI, we face a dual problem: institutional expertise is lacking anywhere other than in the major AI labs themselves, even as the technology’s capabilities increase exponentially. If the technology develops along the trajectories that many researchers consider likely, the window for shaping that trajectory may be quite narrow. Yet our political elite is largely oblivious to what is actually happening.

Skepticism in the face of foretold revolutions is certainly reasonable. Crypto was supposed to remake finance, and we have yet to see the metaverse upend social life. The incentive structures of industry push toward overstatement, against which many policymakers are healthily inoculated. There is wisdom in pattern-matching. For anyone whose job involves allocating attention efficiently, tuning out declarations of the eschaton has been a sensible heuristic. Empirically, when start-ups fail at a rate of roughly 90 percent, such responses are right more often than wrong. But how many years of continued scaling are enough to begin changing one’s mind?

The difficulty is that heuristics break down with tail events. We should be open to the idea that, one day, Silicon Valley may produce a set of technologies so powerful that policymakers ought to take them more seriously than even the internet. Perhaps the proverbial wolf will never appear, but in the meantime, we need a more sophisticated method of parsing developments than inference from past events. We must ask ourselves what it would actually look like if something real were emerging from Silicon Valley. Presumably, it would look like brilliant people devoting their careers to building it, with hundreds of billions of dollars in capital investment. Capability gains might surprise even the people building the systems. There would be handwringing about safety concerns. It would look, in other words, quite a lot like the present moment.

To be “AGI-pilled” is not to hold a specific prediction about dates or architectures. It is to look at the empirical trendlines and theoretical possibilities and conclude that AI will be a very big deal, maybe a bigger deal than any other technology. The exact rate of transformation is difficult to predict, and the frictions of diffusion may slow things down. But San Franciscans talk about the future probabilistically. They may not be fully confident in any single scenario, but they place meaningful weight on outcomes that would be wildly transformative. What exactly those outcomes look like, even the engineers can only gesture at. This is not the language of snake-oil salesmen, but of people building something they don’t fully understand.

The scaling curves could plateau, and Patel admits as much. The current architectures could hit walls that require breakthroughs no one knows how to achieve, and timelines could stretch to decades or beyond. The point is not that AGI is certain, but rather that the people closest to the technology are entertaining extreme scenarios as a real possibility. They believe something very large might be happening and are unsure whether to be excited or terrified.

The Missing Policy Manual

Perhaps surprisingly, The Scaling Era has little to say to policymakers. Though Patel draws upon economics, history, biology, philosophy, and more to sketch out potential futures, at no point does he entertain debates about export controls or state-level preemption. This approach lets the book remain in the empirical realm and avoid being sullied by the normative.

This is not unique to Patel. Silicon Valley has generally treated policymaking as someone else’s responsibility, outside the scope of its role in the world. To a large extent, the founders and engineers of the West Coast self-selected into technical professions because they enjoyed building as a way to fix problems. Marked by a certain libertarianism, their instinct is to reach toward technical solutions. When those fall short, as they may well under the force of civilization-altering technology, there are few contingency plans. San Francisco has little practice thinking about the mess of governance.

That is not to say that Patel has shied away from politics entirely. In other writings, he has argued that export controls on semiconductors and manufacturing equipment serve both national security and safety goals, and he has emphasized the importance of preventing adversaries from stealing model weights. Some of his guests, like Leopold Aschenbrenner, who sketched a vision of a government-backed AI project built with security akin to the Manhattan Project’s, have at times raised geopolitical questions on the podcast directly. But these threads remain scattered across interviews and blog posts, never synthesized into a policy framework. In this, Patel merely reflects his environment. Just as San Francisco’s comparative advantage is not governance, Patel’s is not policy analysis. His strength lies in surfacing how technologists think, ceding to other venues the responsibility of regulatory debate. This division of labor is reasonable enough, but the result is frustrating. Those who understand the technology best remain largely disengaged from politics, while policymakers watch the developments in San Francisco with confusion.

Though its voice is heard in neither the pages of The Scaling Era nor on the podcast, there is a nascent policy community that takes transformative AI seriously. It, too, is divided. On one end are those who call for a halt to frontier development, or at least a pause until the labs can guarantee safe deployment; on the other are those who prefer to forge onward with technical progress and address governance problems as they arise. Some, like my colleague Dean Ball, argue that decisions as massive as regulating the most important technology cannot be made preemptively; the standards for evidence must be much higher than an expected-value calculation would otherwise indicate.12 Others argue that a pause is nearly impossible to coordinate globally and risks ceding ground to China. The pause advocates counter that such a laissez-faire approach cannot be taken when civilizational wellbeing is at stake.

For years, this debate remained somewhat abstract. In practice, most AGI-pilled policymakers coalesced around “no regrets” policies: interventions worth pursuing regardless of whether transformative AI arrives. These commonly included increasing energy supply, encouraging the development of third-party evaluations, and improving cybersecurity. The intent was to create a policy playbook whose outcomes are defensible across a range of scenarios, even if the scaling curves plateau. But as AI capabilities improve, some have suggested that no-regrets policies may no longer suffice. Jack Clark, long wary of regulating AI too soon, began speculating in summer 2025 that no-regrets policies might not be nearly extreme enough if we expect something like AGI in just a few years.13 The Scaling Era offers no resolution, neither to the question of prudent policymaking nor to the timelines under which it must occur. Instead, it serves as a record of the vertigo under which future policy choices must be made.

There is a deeper reason The Scaling Era feels so uncanny as a historical document. If AI turns out to be a useful but unremarkable technology, these conversations will be remembered as indulgent and self-obsessed. If AI proves transformative, these interviews may well read like early nuclear debates. Either way, Patel has preserved a record of elites reasoning in public before it is clear where the scaling laws will take us.

For readers inclined toward engagement, the present moment offers an unusual opportunity. Unlike past transformations of comparable potential magnitude, this one is unfolding in relative openness. Even though the major AI labs publish less than they did five years ago, new capabilities are demonstrated publicly every few months, and there is a thriving scene of builders and hobbyists who discuss developments freely. The people building these systems are generally willing to explain what they are doing. This transparency may not last, but for now, at least, the views, visions, and conversations of the AI elite remain open for anyone willing to listen.

This article originally appeared in American Affairs Volume X, Number 1 (Spring 2026): 214–25.

Notes

1 Dwarkesh Patel, “Dwarkesh Podcast Archive,” accessed January 2026.

2 Dwarkesh Patel, The Scaling Era: An Oral History (San Francisco: Stripe Press, 2025), 23.

3 Patel, The Scaling Era, 24.

4 Patel, The Scaling Era, 26.

5 Patel, The Scaling Era, 27.

6 Patel, The Scaling Era, 179–84.

7 Patel, The Scaling Era, 15–18.

8 Patel, The Scaling Era, 167–72.

9 Melissa Heikkilä et al., “Is AI Hitting a Wall?,” Financial Times, August 18, 2025.

10 Arvind Narayanan and Sayash Kapoor, “AI as Normal Technology,” Knight First Amendment Institute, April 15, 2025.

11 METR, an independent AI evaluation organization, measures how long AI systems can work autonomously on software and research tasks, specifically the length of tasks (as measured by human completion time) that AI can complete with 50 percent reliability. The trendline has held remarkably steady, doubling roughly every seven months, even as models have progressed from answering simple questions to completing multi-hour engineering tasks. If the trend continues, METR projects that AI agents will be able to independently complete tasks that currently take humans days or weeks. See: “Measuring AI Ability to Complete Long Tasks,” METR, March 19, 2025.

12 Dean W. Ball, “How I Approach AI Policy,” Hyperdimensional, September 18, 2025.

13 Jack Clark, “Import AI 405: What if the Timelines Are Correct?,” Import AI, March 24, 2025.

