
Back to the Fifties: Reassessing Technological and Political Progress

As in the 1950s, today’s surface optimism for technological miracles thinly disguises oceans of social and psychological alienation. That our own era is filled with silent suffering and dysfunction has been well-documented in a series of studies and publications beginning over a decade ago. Already in 2011, MIT researcher Sherry Turkle published the book-length lament Alone Together: Why We Expect More from Technology and Less From Each Other. It’s no stretch to group such works with 1950s books like The Lonely Crowd, as the internet changed the medium but the underlying consumer-driven dynamic—chasing a burgeoning material culture toward the mirage of a techno-utopia—seems a common link.

These gaps between expectation and reality are really failures to plumb the notion of progress itself. There’s the progress we promote and even believe in, and then there’s the real world as it unfolds, progressing or otherwise.

Progress: Linear and Exponential?

The belief that we’re headed somewhere better—making progress—is remarkably durable. The myth of linear “onward and upward” progress grew out of the Enlightenment. Auguste Comte systematized it in the nineteenth century. Comte seems to have imagined progress (to use an anachronistic metaphor) proceeding like an escalator or elevator, rising through barbarism and ignorance to utopia—a succession through stages, from religion to philosophy to science, to a finally mature “technoscience.” Comte’s vision lies behind the inveterate optimism of techno-utopians today, who are confident that science and technology are the essential tools for making a better world. But whereas previous visionaries invoked scientific discovery and core technological invention as a panacea, today’s techno-culture focuses almost exclusively on improvements to an old discovery, the digital computer.

The computational metaphor—a totalistic view of humans as computers—pervades Silicon Valley, which treats words like “technology” and “reason” as synonyms for digital computing. Vaclav Smil, who in 2010 was selected by Foreign Policy as one of the “Top 100 Global Thinkers,” provides an alternate vision of the future in his latest book, Invention and Innovation: A Brief History of Hype and Failure. His book, out this year from MIT Press, brings needed attention to more fundamental and pressing problems than the reductionism of the computational metaphor.

The list is long: feeding an increasingly undernourished world, reducing the application of synthetic fertilizers, developing “safe, effective, and environmentally friendly refrigerants” (today’s refrigerants still contribute to climate change), achieving vastly better battery performance—multiplying the energy density of today’s lithium-ion batteries by ten would still leave us with about a quarter of the energy contained in a kilogram of kerosene—and “securing adequate supplies of food, water, energy, and material needed to lead healthy lives with decent life expectancies.” (We might also add, as Smil notes, better treatments and cures for dozens of categories of cancer.)

Our tunnel vision, he argues, imperils awareness and scrutiny (and funding) in these other areas. It has also distorted modern notions of progress, which many technophiles today believe to be exponential. Exponential progress means we’re going “onward and upward” not like an escalator but like a rocket, accelerating off a launching pad toward some fantastic destination in space. Exponential progress would likely have seemed dubious—Panglossian, even—to an Enlightenment figure like Voltaire. It’s a distinctively modern idea.

Failure to Launch

It’s also not true. As Smil and many others have noted, the American media routinely herald as “breakthroughs” even minor developments in applied disciplines like AI and other high-tech gadgetry. The constant drumbeat of world-changing developments no doubt buttresses claims about rocket-fast progress. But there’s ample evidence today that digital technology itself is no longer exponentially progressing—even though it is the one area where the exponential view is most likely to hold true. As with many other distortions and fallacies popular today, the line between fact and fiction continues to blur.

The case for techno-scientific progress typically hinges on very real advances in microchip performance. The eponymous Moore’s Law, attributed to former Intel chief Gordon E. Moore, states that the number of transistors on an integrated circuit (then the fastest kind of computer chip) roughly doubles every two years, while the cost per transistor is cut in half. Moore’s Law has held roughly true for decades since Moore first voiced a version of it in 1965; today’s computers are roughly 1.75 billion times more powerful than the computer that guided the Apollo capsule to the moon. Unfortunately, Moore’s Law is an exception; progress in other areas remains stubborn and stalled. And even this exception is coming to an end.

As Smil notes, between 1993 and 2013 the transistor count on single-processor computer chips actually grew faster than Moore’s Law predicted. But growth slowed from 2008 to 2018, with the best single-processor count rising from 1.9 billion to 23.6 billion transistors. If Moore’s Law were truly a law, the 2018 count should have been 60 billion. Between 2015 and 2018, the growth rate was just 4 percent. As Smil puts it, “the end is clearly in sight.”
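
To make the arithmetic concrete, here is a minimal sketch—illustrative code, not something from Smil’s book—comparing a strict two-year-doubling projection against the figures quoted above:

```python
import math

# Figures quoted above (from Smil): best single-processor transistor counts.
count_2008 = 1.9e9     # transistors, 2008
count_2018 = 23.6e9    # transistors, 2018
years = 2018 - 2008

# What strict Moore's Law (doubling every two years) would predict for 2018.
projected_2018 = count_2008 * 2 ** (years / 2)
print(f"Moore's Law projection for 2018: {projected_2018 / 1e9:.1f} billion")  # ~60.8 billion

# The doubling time actually implied by the 2008-2018 figures.
implied_doubling = years * math.log(2) / math.log(count_2018 / count_2008)
print(f"Implied doubling time: {implied_doubling:.1f} years")  # ~2.8 years, not 2
```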

Beyond computer chip and storage performance increases, the case for exponential progress gets exceedingly murky. While the tech media has taken absurdly bold and optimistic stances on even nominal advances in anything digital, the actual inventions and innovations of the twenty-first century are more sobering. Many of the scenarios bandied about in the filter bubble of high-tech haven’t happened and often seem implausible and unwanted anyway.

AI is perpetually hyped, but “Big Data AI,” which has dominated the field in the twenty-first century and especially during the last decade, is based on extending the decades-old neural network algorithm. So-called deep learning, or convolutional neural networks, wasn’t notable until about 2012, when Geoffrey Hinton and his graduate students at the University of Toronto used it to beat all comers at the annual ImageNet contests, where thousands of categories of high-resolution photos are labelled by AI systems (like caterpillars, Shetland ponies, people, trees, cars, and the like—the competition selected a thousand categories out of a total of twenty-two thousand, culled from Flickr).

There have been legitimate innovations in deep learning since 2012, including the publication of a paper on “dropout” (again from the Toronto group), which limits a bugbear of neural network training known as “overfitting,” and important work on the attention mechanism in a 2017 paper published by Google Research and Google Brain scientists (along with, again, a researcher from Toronto). “Attention” makes possible the recent performance we’ve seen from conversational systems like ChatGPT, and it underlies the large language models used in text-generation systems, like GPT-3 (now GPT-3.5, soon to be GPT-4), developed by the hyper-funded research and deployment company OpenAI.

But that’s about it for the last decade. Nothing like AI possessing basic common sense or understanding is on the horizon. Depressingly, the “breakthroughs” are applied, not fundamental. The latest AI systems all rely on massive volumes of centralized data (big data) and huge quantities of energy-sucking GPUs for their training and operation. Nothing even approaching fundamental research has touched AI in decades; mostly, computers have just gotten faster, and the web has made available big datasets for them to crunch.

Large language models (LLMs) like GPT are, again, a case in point. LLMs use an architecture called a “transformer” to enable long-range lookback on a sequence of words or tokens known as a prompt. Transformers use the aforementioned attention mechanism to, in effect, “see” prior words in a sequence and interrogate them for information relevant to generating the next word. The transformer architecture of LLMs like GPT enables conversational systems like ChatGPT to find reasonable completions of a user’s prompt. The system simply adds the next best word to complete the user-supplied sequence. It then feeds the new prompt (with one more word appended to it) back through the feed-forward, trillion-parameter neural network trained on billions of words culled from the web and from digitized books (there are currently about five million digitized books). Such generative models work surprisingly well—witness the ballyhoo around ChatGPT and its uncanny ability to generate plausible English text—but they rely on a kind of brute-force computation made possible by piggybacking on human-supplied text on the web. It takes a full ten thousand GPUs—specialized processors—to squeeze the impressive performance out of neural-network-trained systems like ChatGPT. Is this an innovation? Yes. Is it the dream of AI come true? Hardly.
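
The loop described above—score every candidate token, append the likeliest one, and feed the longer prompt back through the network—can be sketched in a few lines. This is a minimal illustration only, using the small, openly available GPT-2 model from the Hugging Face transformers library as a stand-in (ChatGPT’s own model and weights are not public), and greedy selection in place of the sampling that production systems actually use:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2: a small, public stand-in for the far larger models behind ChatGPT.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The future of technological progress is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(20):                            # generate twenty more tokens
        logits = model(input_ids).logits           # a score for every vocabulary token
        next_id = logits[0, -1].argmax()           # greedy choice of the "next best word"
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)  # append and repeat

print(tokenizer.decode(input_ids[0]))
```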

Moreover, AI, with the current focus on LLMs, is still siloed in the virtual world. To see this, turn to another perennial obsession: self-driving cars. Curiously, the hype surrounding self-driving cars, so typical of the mid-2010s, has all but died off. It’s not simply that LLMs have stolen the spotlight (though that is partly true). It’s that self-driving AI doesn’t work in real-world conditions. The differences between web-centered AI like ChatGPT and (shall we say) nature-centered AI like self-driving cars are informative. All machine learning approaches—of which deep neural networks are an example—rely on a type of inference known as induction. Induction takes examples or prior observations and produces a rule: if I see a thousand white swans, I generalize the rule “all swans are white.” Induction generally works better with more examples, which is why models like GPT rely on such large volumes of text. But induction by itself is manifestly incapable of reproducing real intelligence—when the future does not resemble the past in some patterned way, induction fails. Predictions about stock markets fail. Importantly, cars crash.
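
As a toy illustration of the swan example (hypothetical code, not drawn from the article), here is induction in miniature: a rule distilled from past observations that collapses the moment the future stops resembling the past:

```python
# A thousand prior observations, every one of them a white swan.
past_observations = ["white"] * 1000

def induce_rule(observations):
    """Generalize from examples: predict that every future swan will share
    the only color ever observed."""
    colors = set(observations)
    return colors.pop() if len(colors) == 1 else None

rule = induce_rule(past_observations)        # -> "white", i.e. "all swans are white"
print(f"Induced rule: all swans are {rule}")

# The first black swan is an edge case the rule cannot anticipate.
new_swan = "black"
print("Rule holds" if new_swan == rule else "Rule fails on a black swan")
```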

The limitations of induction are not seen (or rarely seen) with LLMs, in large part because such models incorporate so much of the textual content of the web that they achieve a kind of “closed world,” where (to continue the example) all the swans can be counted. Out in the wild, on a road, the future will sometimes break from past experience. Scientists working on self-driving cars call these breaks “edge cases,” and they occur whenever the training of the neural network “brain” in the self-driving vehicle can’t account for an anomaly on the road. Weather conditions like rain or snow, debris on the road, other objects or vehicles in previously unseen positions, and partly occluded or damaged road signs are all possible edge cases. They are all possible failure points. Induction seems adequate for the closed world of the web. It seems woefully inadequate for deploying self-driving cars or, for that matter, setting Rosie the Robot out in the world for grocery shopping or to pick up the kids. Further difficulties like the so-called ramification problem, involving chained cause-and-effect events—each new event ramifying into unpredictable outcomes—exacerbate the already onerous requirements for AI systems operating in dynamic environments like busy roads, or the world writ large.

In other words, while LLMs are an exciting and largely unpredicted result for natural language processing, they’ve in effect siloed AI research and development into places where induction can be made to work, like the proverbial drunk looking for his keys under the lamppost. The same approach won’t work for autonomous navigation or dynamic applications of robotics. And given the amount of time and attention spent on LLM research and deployment, it’s unlikely that AI is on the brink of new and necessary breakthroughs into noninductive types of systems and inference.

Meanwhile, much of the architecture of the modern world, like engines, turbines, and electric motors, as well as materials like steel, high‑yield fertilizers, aluminum, lighter plastics, and titanium, was discovered or invented long before the twenty-first century—in many cases long before the invention of the computer itself. (The single decade of the 1880s, as Smil notes, accounts for more fundamental discoveries, inventions, innovations, and successful deployments than any other decade before or after it. The list is so long and so obviously important that tweaks to neural networks—which in one form or another have been around since the 1950s—seem risible and quaint.) Where are the emperor’s clothes?

The decidedly non-exponential record of technological and material progress in the twenty-first century suggests we’re stuck somewhere back in time rather than hurtling toward a techno-utopia. In many ways, we’re back in the 1950s.

Tech Will Set Us Free—or Not

The computer was developed in the 1940s by John von Neumann, John Mauchly, J. Presper Eckert, and other scientists, some of whom had played key roles in the Manhattan Project. The ENIAC and then the EDVAC were used initially for military purposes, for instance to calculate blast ratios for atomic bombs. In the next decade, their role expanded to include increasing the centralizing power and efficiency of large corporations. Mauchly and Eckert developed the UNIVAC, designed as a data-processing machine suitable for corporate and nonmilitary use. The U.S. Census Bureau first contracted it for census counting; it was later used to calculate projections for election results. Its data-processing capability spawned the IBM 701—IBM’s “new brain.” The IBM 700 series ushered in the age of “Big Iron”—large, centralized, mass-produced computing power.

In the 1950s, a young Stewart Brand exited military service determined to find an antidote to centralized organizations, which he thought stifled progress toward human potential and hoarded technology for other purposes. His foray into a 1940s theory called cybernetics, developed by MIT mathematician Norbert Wiener, led, by degrees, to the back-to-the-land movement in the 1960s and inspired the LSD-friendly Merry Pranksters, depicted in Tom Wolfe’s The Electric Kool-Aid Acid Test. Stewart Brand was the counterculture version of the “Renaissance man”—the philosopher, visionary, salesman, and aficionado who first grasped the potential of computers as tools of personal liberation rather than bomb- or profit-making. He launched the Whole Earth Catalog, offering “Access to Tools and Information.” The Catalog won the National Book Award in 1972 and became a go-to resource for back-to-the-landers, who co-opted the academic-military-industrial complex’s electronic technology for personal use. The military had developed computers and other communications technology and then passed them off to the likes of IBM and GM. Why shouldn’t ordinary people have access to this, too?

In the 1970s, the ’60s counterculture abandoned its communes in search of jobs back in the city, but its members took their gadgets and knowledge with them. The now-famous 1970s “hacker” culture followed, which profoundly influenced some of the brightest early computer engineers and designers, like Steve Jobs. Think tanks, too, began adopting open cooperative strategies to boost creative ideas. The Stanford Research Institute and Xerox PARC became hotbeds of digital innovation.

By the 1980s, Brand and his colleagues had helped create one of the world’s first online social networks, the Whole Earth ’Lectronic Link, or WELL. The WELL brought like-minded people together and later gave rise to publications like Wired. Wired in turn promoted the can-do spirit of the early visionaries, and offered a vision of the twenty-first century that would carry forward the cooperative innovation Brand and others foresaw.

This vision of the internet was articulated in the argot of personal liberation. The coming networked world represented an alternative to, and an escape from, big business and its marriage to government. This vision aimed not only at entertainment but at core interactions in business, education, and communication. It was an entirely new model—a system of bottom-up, decentralized networks—a cooperative knowledge society.

The high point of this thinking—that the web and its many applications could save the world—arrived in the late 1990s. The cover of the June 1997 issue of Wired announced “The Long BOOM,” adding, “We’re facing 25 years of prosperity, freedom, and a better environment for the whole world. You got a problem with that?”

But the twenty-first century has, so far, followed a radically different trajectory. Social dysfunction and instability are also products of the “boom.” The gap between the mega-wealthy and everyone else has widened, imperiling the middle class. Web technology and uses for artificial intelligence have proliferated, but productivity has flatlined. We also waged two messy wars and lived through two market crashes—the 2000 tech bubble and the great financial crisis of 2008—while also watching the web transmogrify into a cauldron of misinformation and balderdash. Do we have a problem with that?

Even a cursory walk through the twenty-first century reveals a trajectory of innovation that loops back to the very top-down centralization which had repelled Brand and other visionaries decades ago. With digital computers and computer networks, the twenty-first century’s new military-industrial alliance has returned to centralized Big Data and Big Iron, and like a magic trick we’ve found ourselves back in a decade we’ve mostly forgotten and assumed we’d left behind.

The IBM 701 cost $750,000—in 1950s dollars—making it unaffordable to nearly everyone but big business and the very rich. Are we past that? No. Though even homeless refugees own smartphones today, prohibitively expensive, centralized “Big Iron” computing once again marks the age. Microsoft has invested $10 billion in OpenAI, which makes large language models like GPT and conversational AI like ChatGPT. Much of the cash pays for the gargantuan computing power needed to train increasingly complex models and crunch massive volumes of training data. At the turn of the century, progress on AI was thought to require diverse efforts from myriad individuals, groups, and universities. It’s now all big business. Server farms and the “cloud” are the new zealously guarded mainframes of the old order. We can all be consumers of modern tech, but we can’t all contribute to the making of it. That’s in the hands of the centralized, wealthy elite.

Standing on the Shoulders of Giants

The 1950s boasts an impressive record of innovations, but earlier decades had worked out the math for the underlying advances in physics, like relativity and quantum mechanics. The decade saw hundreds of thousands of young soldiers returning from battle overseas, which drove the direction of change (the interstate highway system, suburbs built from mass-produced housing, automobile improvements like the radial tire and air conditioning, fast food, motels for newly mobile families, and the modern supermarket all come to mind). It produced a laundry list of entrepreneurship and innovation, but, like our century, much of it consisted of applications and extensions of earlier discoveries. Together, the entrepreneurial products of the 1950s feel a lot like Instagram, or other social media. Not discoveries, but clever ideas for a changing culture.

Yet even with this in mind, the ’50s seem more deserving of plaudits about exponential progress than the twenty-first century. In 1953, James Watson and Francis Crick discovered the double-helix structure of DNA. The fundamental importance of this discovery has not been matched in the twenty-first century—this century’s gene-editing technology known as CRISPR is a downstream—far downstream—result of Watson and Crick’s watershed discovery. The laser, invented in 1958, was another breakthrough. In fact, the entire landscape of innovation and invention in the 1950s, high-tech or not, more closely fits the exponential-change talk popular today.

In computing, the ’50s—apart from the already mentioned widespread adoption of “Big Iron” computers—produced the first magnetic computer disk, the computer language Fortran, and the computer modem. Transistors were replacing vacuum tubes, and the integrated circuit was invented in 1958. (The microprocessor—perhaps the most important invention since the computer itself—didn’t arrive until 1971.)

The “space race” was largely a 1960s phenomenon, but it began in the 1950s. The Soviet Union’s successful launch of Sputnik in 1957 stoked Cold War fears and led to the founding of NASA in 1958.

Arguably more so than the current century, the 1950s achieved impressive technological, social, and cultural gains. Yet also similar to our era, reactions to even modest inventions and innovations were typically ebullient, touting coming miracles like flying cars and saucers, super-intelligent AI (the field of AI itself kicked off in 1956 at the now-famous Dartmouth Conference), cures for hunger and the common cold, and perfect lives of leisure. The Jetsons aired in 1962 but was an obvious product of the ’50s space race.

Our century has been preoccupied with apps for cell phones and various (and often dubious) innovations on the web, like cryptocurrency or the “like” button. It’s true that entrepreneurs in the 2000s found the secret sauce for web commercialization, in what venture capitalist Marc Andreessen has called “the read/write web.” But this formula, dubbed “web 2.0”—ushering in blogs, posts, comments, emoticons, likes, and tweets—has been recycled again and again ever since. Web enthusiasts have hastily declared developments like cryptocurrency “web 3.0.” But no one has found a successful formula yet. We keep tweeting.

The Limits of Make-Believe Progress

Silicon Valley has successfully promoted a view of history with fever-pitch rhetoric about unbounded technoscientific advances, resulting in future utopias or apocalyptic killer robots. Lopsided focus on computer technologies like artificial intelligence, by its very nature, spawns a sci-fi type of thinking called futurism.

In his 2012 book Future Babble, Daniel Gardner documented and confirmed what common sense has long concluded: that future—and especially “expert”—prediction is a crapshoot. Still, strains of futurism have become commonplace in the twenty-first century, though the obsession is much older. Techno-utopianism can be traced back through Karl Marx to Enlightenment thinkers like Turgot, a minister to Louis XVI, and the philosophe Marquis de Condorcet, who wrote inexplicably ebullient missives about universal human progress while hiding from the Terror of the French Revolution. The idea that technology might run amok is found in the work of H. G. Wells and in that of Czech playwright and novelist Karel Čapek, who introduced the word “robot” in his 1920 play R.U.R. (Rossum’s Universal Robots). The English writer E. M. Forster also wrote an early account of technological tragedy, his 1909 novella The Machine Stops.

Whether Armageddon or Utopia, techno-futurism sees history as nothing more than the ever-increasing power of technology. Like all reductive, totalistic views, it limits broader discourse, with consequences for affluent societies that may seem harmless but do real harm. Present-day developments get transmuted into highly speculative futuristic claims. Public discussion about technology, the environment, and other issues—which might wisely focus on bringing about needed improvements—becomes a lectern for speculation about implausible or irrelevant futuristic outcomes. When things go wrong, as they do, techno-utopians often ignore, deny, or rationalize. Armageddonists ramp up implausible claims about super-smart robots taking over. The mindset survives financial collapse and pandemics. It’s impervious to real wars, too.

Old Is New Again

The Cold War between the United States and the Soviet Union has revived thanks to a former KGB spy, Vladimir Putin, now waging war on Ukraine. The now more-than-year-old conflict has claimed thousands of lives. America and NATO have been pulled into its orbit. Americans at home see Soviet-era fears and anxieties returning. Supply lines are breaking. Suspicions are rising.

In February this year, Putin made clear he was mobilizing Russia for a war that could last a generation—hot or cold. Whether his strategy is countering NATO expansion to the east or empire building, it’s clear he views the war he initiated in Ukraine as a battle between East and West. Analysts think he’s betting that America and its allies lack resolve and are in decline. This helps explain his willingness to wage war for the long run, and it also signals what the Economist in its February issue calls “a drift towards a new cold war.”

Ukraine is holding off Russian advances with a constant supply of aid and weapons from America and western Europe, mostly the basic accoutrements of battle—mortars and ammunition. The United States sends 155-millimeter howitzer shells and other munitions to Ukraine, and munitions manufacturers are now straining to meet an ever-increasing demand, as the intensity of fighting quickly exhausts supplies. The Department of Defense has begun fast-tracking defense contractors to step up production. This means that despite ongoing pain in our economy, the conflict overseas has put America and its allies back on a war footing. The old Cold War alliance of business contractors and the military is rapidly solidifying again. The military-industrial complex is back.

The Red Scare is back, too. China is probably doing what China has always done best—make diplomatic noises while subverting Western interests. China is fond of playing the part of Iago in the West’s Othello. But direct military action by the People’s Republic of China is also possible, and its designs on Taiwan are clear enough. We’re back to worrying about escalations of conflict with Russia and China, which since the 1950s have also been inextricably tied to thermonuclear threats. Is this forward momentum? Is it progress?

Wars—even cold wars—are a kind of correction to futurism, in large part because their horror forces an assessment of the world as it is—as it’s unfolding. Wars present undeniable evidence that the future is up for grabs—as it always has been—a fact that rankles futurists and afflicts affluent, consumer-driven society itself with largely indigestible chunks of reality.

Tolstoy’s insistence in War and Peace that the quotidian decisions and actions of humans and their leaders form the future and ensure its unpredictability captures the folly of futurism as much as the sanguine immersion in perpetual distraction characteristic of our consumer-driven entertainment culture. Bewilderment, frustration, and denial are common responses. The catastrophe unfolding in Ukraine is not occurring—not physically happening—on Twitter or other social media. It fits no techno-futurist or consumer formula. It’s not a fictitious storyline on Netflix. It can’t be “canceled.” What do we do? What should we believe? How do we respond?

The Iceberg Psyche

In the early 1950s, sociologist David Riesman argued in his best-selling The Lonely Crowd that the “social character” of Americans had shifted after the Second World War. Riesman called earlier American character “tradition-directed” and, later, “inner-directed.” Folks hewed to tradition or followed internal “gyroscopes” to navigate through life. By 1950, argued Riesman, the social character of Americans had become “other-directed.” Other-directed character swaps traditions and inner-direction for peer groups and the public. We use an outward-searching “radar” (these are metaphors) to determine values and gain approval. Sociologists and other academics have largely agreed with Riesman; his argument links us not only to the 1950s but to every other decade after it. Other-directedness seems to be a permanent feature of modern life.

Another sociologist, Stjepan Meštrović at Texas A&M, argued in his Postemotional Society that not just Riesman’s social character but our very emotions are increasingly manipulated and packaged for others, like products. He wrote in the 1990s, when TV still dominated media and advertising. Today, we have a more comprehensive menu of diversions, which no doubt amplifies Meštrović’s point. Our new metaphor resembles an iceberg. We still feel real emotions but, more than ever, they are hidden from view. We package simulated emotions for public consumption instead. The iceberg metaphor also captures the 1950s, when a rapidly spreading entertainment culture helped deflect the dark shadow cast by Stalin and the Soviet Union. Technological progress is one thing. But the return of Cold War fears and anxieties amid the false cheer of technological progress is another. A comprehensive look at our world today suggests our ideas about progress don’t fit reality—in what we do, and how we feel. It suggests at a deep level that our theory of history itself is flawed.

Cycles and Spirals

Not everyone living in the Enlightenment era believed in onward-and-upward progress. In the eighteenth century, Giambattista Vico, a little-known professor of rhetoric in Italy, believed that societies and nations progress and then inevitably regress, tracing circles or spirals in history and ending up, after reaching a high point of affluence and success, repeating mistakes from bygone eras long forgotten. Nations (and empires) run their course this way, Vico argued, resulting in a return to places our ancestors had already been. In his masterpiece, The New Science, Vico took up examples common to the historical accounts of his time—the kingdoms of ancient Egypt and later the Roman Empire.

The New Science is effectively impenetrable to all but determined Vico scholars. (The writer James Joyce once remarked that he didn’t understand The New Science but read it anyway. There are references to it in Ulysses—a book that might also fit Joyce’s remark.) Still, Vico’s main arguments are unmistakable, and unsettling because he insisted that history is inevitably cyclical, and the return always follows on the heels of the apex. Vico argued that advanced nations lose the poetic, passionate language characteristic of earlier times, and begin using legalistic and rational language, first to construct institutions for a common good, and later to turn on each other. The seeds of “the return” are planted in the very progress of the body politic.

After the ricorso—the return—commonwealths may degrade into bureaucratic monarchies. The use of language then shifts as well. Rational-sounding discourse and “passions” leading to vice inevitably corrupt “manners” (not a bad gloss of social media today). Previous progress then becomes self-defeating.

In other words, Vico’s ricorso is, in a sense, a historical catch-22. It emerges because societies and nations must adopt legal and rational institutions and communication to survive and to prosper. But it’s this very framework that signals the eventual demise. The poetic imaginative core of humans, now denuded, can’t support all the legalese and supposed rationality. (A modern way of putting this is that societies become less innovative and more self-centered and querulous, which leads to the infighting and regression.) The dynamic is difficult to see in real-time, in part because the nation has adopted an entire way of life that now constitutes its zeitgeist—the way everyone thinks and acts. Vico uses this framework to explain how the Egyptian kingdoms vanished into the desert, and how Rome fell to marauding Visigoths—which then led to the Dark Ages and to new beginnings in the Renaissance and the Enlightenment.

In some sense, the idea of cyclical progress isn’t particularly earth-shattering. Chinese thinkers like Sima Qian of the Han dynasty as well as Ibn Khaldun, an Arab scholar living in the Middle Ages—or for that matter Polybius, who lived in ancient Greece—have also advanced cyclical theories of progress. So has the poet W. B. Yeats, who distilled it in “The Second Coming.” It captures what we already know, that civilizations do in fact decline or self-destruct—a scenario popularized in recent times, for instance, in Jared Diamond’s 2005 book Collapse: How Societies Choose to Fail or Succeed. But whereas Diamond’s treatment of decline made room for free will and personal choice, Vico insisted that decline will happen, in some way, regardless of our choices. It’s built into the levers of human progress.

Unlike other proponents of cycles in history, moreover, Vico adumbrates a historical spiral, where each “loop” back after the pinnacle starts at a slightly higher or better spot. As in our personal lives, perfection or utopia is never reached, but long-term (very long-term) progress still results.

Nevertheless, it seems Vico would find a cold and unreceptive audience today. The computational metaphor excludes cyclical views of history for roughly the same reason Moore’s Law promised ever-increasing computing power. The popular historian Yuval Harari, for instance, touts a computational theory called “dataism,” where everything in the universe, including people, is just data. This brand of futurism tethers our future to the very problem critics like Smil have exposed—the endless myopia of techno-optimists. If everything is “data,” then we are merely inputs to something else. It’s difficult to find a Renaissance moment in such reductionist views.

Vico saw the degradation of constructive cooperation as the first sign of the return. If that’s true, it’s a telling indictment of our own increasingly fractured and polarized world. Regardless, our predicament is not new, though the details have changed. We’ve survived wars, financial crashes, pandemics, and so far, the web and AI—our latest obsessions. Yet global and national instability rises, and we remain burdened again with problems we’ve already worked out, and troubles we thought we’d left behind.

Are we really “back in the ’50s”? Maybe. Or maybe we’re headed for something worse.

This article originally appeared in American Affairs Volume VII, Number 3 (Fall 2023): 45–58.
