The Rise and Fall of American Growth:
The U.S. Standard of Living since the Civil War
by Robert J. Gordon
Princeton University Press, 2016, 762 pages, $40
Prior to the 2008 financial crisis, it was widely taken for granted that the U.S. economy was exceptional, at least in the developed world: it was the only one of the major developed economies that could grow. The Japanese economy had stagnated in the 1990s. Europe was on the road to similar stagnation on account of its pensioner-heavy demographic profile and an overregulated business sector. China, though growing rapidly, was seen as playing economic catch-up with the developed world, and its ultimate potential remained limited by its political system and demographics, at least according to one common argument. The United States—with its relatively high economic growth rate, increasing population, seemingly strong labor market, dynamic capital markets, innovative technology sector, and commitment to free market principles—was supposed to be different.
The ensuing decade has not been kind to the thesis of American economic exceptionalism. The average rate of U.S. GDP growth since the crisis has been among the most anemic of any decade on record, barely beating out the recovery following the Great Depression. Scaled to account for demographic changes, the U.S. labor market still has not fully recovered from the 2008–9 recession nearly eight years later. The percentage of employed prime-age individuals is lower at present than at any time since the 1980s, when female labor force participation was comparatively low. The demographic profile of the United States has also worsened, and the birthrate has declined to below replacement level. To be sure, the economies of other large, rich states also generally look worse than they did in 2007: Europe has only recently emerged from its financial crisis; the Japanese economy remains stagnant; and even China’s economy has slowed. But the United States was thought to be fundamentally different from the rest of the developed world. What went wrong?
Poor U.S. economic performance over the past decade raises an even larger set of questions: Was the U.S. economy particularly strong during the decade before the financial crisis? Or was its faster-than-normal growth rate illusory in the first place? Has the U.S. economy meaningfully slowed down? Or was it growing slowly to begin with during the 2000s? In other words, is its weakness new? Or was it simply hidden before?
The Rise and Fall of American Growth
Robert Gordon’s The Rise and Fall of American Growth addresses these questions by placing America’s economic performance over the past two decades into the wider context of the economic history of the United States since 1870. Gordon’s main argument is that the technological inventions of the past fifty years have not sufficed to propel economic growth in the way that inventions did from the post–Civil War era through the 1950s. Gordon maintains that technology is what economists call an exogenous factor that can accelerate economic growth; absent technological change, the economy would grow only as the labor force increases in size and as the stock of capital grows. But this exogenous factor—the magic of technology—has not been performing of late, according to Gordon, resulting in lower realized GDP growth and only modest gains in productivity.
Gordon makes this argument through a detailed elaboration of the history of American invention in the twentieth century. In particular, he distinguishes the economic ramifications of the information technology revolution of the 1990s from the industrial revolutions of the late nineteenth and mid-twentieth centuries. If Gordon had merely provided a history of these industrial revolutions, their technologies, and the impact of these technologies on economic growth and productivity, he would have done enough.
The Rise and Fall of American Growth, however, also provokes its readers to pose deeper questions: Why has the information technology revolution been so much less impactful than the two previous major industrial revolutions? Why were the inventions of prior years so impactful? Are the reasons merely technological? Or is there something more? Is there, to use the term that economists use, another exogenous factor beyond technology that might have an effect on economic growth—or even technological developments themselves?
The Third Industrial Revolution: Worse Than the Second
The bulk of Rise and Fall provides a history of the inventions of the nineteenth and twentieth centuries and evaluates their impact on American productivity. It is important here to distinguish between economic growth, which simply measures the growth in a country’s economic consumption or output, and growth in productivity, which is defined as output per hour worked. Productivity is the real target of Gordon’s study, as it is productivity that results in rising per capita wealth and resources—the basic aim of most economic policies. Indeed, the subtitle of the book is The U.S. Standard of Living since the Civil War.
Gordon’s study is organized chronologically into three distinct periods. The first covers the initial portion of the “second industrial revolution,” from 1870 to 1940, when life in the great cities became, more or less, what we now know it to be. This period is defined by the invention of the internal combustion engine, the radio and telephone, electric light, power, and tools, and so on. Productivity growth, though high, was within the realm of what had been seen previously, according to Gordon. We return to this topic below.
The second period covers the innovations of the 1930s and 1940s (the Depression and World War II), which were introduced into American manufacturing during the war and into civilian life during the 1950s and 1960s, leading to what might be described as an age of abundance. Gordon calls this period one of “evolution”—as opposed to the “revolution” of the second industrial revolution—because many of the technologies that defined it evolved from earlier inventions (electronics from electricity, better planes, better cars, etc.). This period was also a “golden age” of productivity, according to Gordon, with productivity data showing a quantum leap forward unlike anything seen before.
The third period covers what Gordon calls “the post-1970 growth slowdown,” and here Gordon focuses his attention on the question of why the innovations of the information technology revolution did not, in his telling, bring the same gains as did the innovations of the pre–World War II period or the evolutionary developments of the postwar era.
Gordon’s argument can be summarized simply by listing some of the innovations that he analyzes in order to make his case:
1870–1940: The internal combustion engine, automobiles and early airplanes, contemporary street pavement, electricity, electric light, and electric power, indoor plumbing and modern sewer systems, central heating via furnace, urban mass transit, telephones, radio and film, processed food, refrigeration, store-bought clothing, department stores and catalogues, urban bungalow housing, reductions in infant mortality, suppression of diseases through better understanding of epidemics, the forty-hour work week, new forms of insurance, and large banks providing mortgage and consumer credit.
1940–70: Suburbs, shopping malls, widespread air conditioning, reliable household appliances to replace housework, televisions, interstate highways, mass affordable air travel, complex electronics, LP records, ubiquitous home telephones, TV news, mainframe computers, antibiotics, widespread vaccination, enhanced birth control measures, mass college education, widespread use of plastics.
1970 and beyond: “McMansions,” apparel retailing in big box supermarkets, widespread central air conditioning, microwave ovens, cable television, personal stereos, cassettes, CDs, MP3s, cell phones, smart phones, handheld calculators, spreadsheets, personal computers, the Internet, changes in news delivery, enhancements in medical imaging and in therapies for cancer and cardiovascular disease.
Surveying these lists, Gordon concludes that “throughout the different dimensions of the growth experience, the decades since 1940 do not exhibit the same uniformity of revolutionary change as occurred between 1870 and 1940. Instead, the year 1970 marks a distinct break point between faster and slower growth.”
Gordon’s underlying insight is that transformations in how we live at home, power our tools, move ourselves and our goods, maintain our health, and sustain ourselves—in other words, physical developments—matter more than enhancements to the technologies used to occupy our minds in leisure or to lend order to the chaos of business and personal life. To quantify this thesis, Gordon estimates that output per hour increased at a rate of 1.5% annually from 1890 to 1920, 2.82% annually from 1920 to 1970, and 1.62% annually from 1970 onward. In other words, productivity growth from 1920 to 1970 was roughly 75–90% higher than it was before or after, nearly double the rate.
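The gap between these three rates becomes vivid once the annual figures are compounded. The following sketch uses only the three rates quoted above; the fifty-year horizon is chosen purely for illustration:

```python
# Compounding Gordon's three estimated annual rates of growth in output
# per hour over a fifty-year horizon. The rates come from the text; the
# horizon is chosen for illustration.

def cumulative_growth(annual_rate_pct: float, years: int) -> float:
    """Total percentage growth implied by compounding an annual rate."""
    return ((1 + annual_rate_pct / 100) ** years - 1) * 100

for period, rate in [("1890-1920", 1.50), ("1920-1970", 2.82), ("1970-present", 1.62)]:
    print(f"{period}: {rate:.2f}%/yr -> {cumulative_growth(rate, 50):.0f}% over 50 years")
```

At 2.82% per year, output per hour roughly quadruples in fifty years; at 1.5–1.6%, it only slightly more than doubles, which is why Gordon treats 1920–70 as a categorically different era.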
It is necessary to point out the implicit methodological limits of using productivity data to draw conclusions about the history of innovation. After all, how can one fairly compare economic data across such long periods? Is it methodologically consistent to derive a metric like total factor productivity from data on hours worked, capital per worker, educational attainment, and output estimates from the late nineteenth century? How reliable were records of total hours worked in, say, 1895, let alone data on capital per worker, which is still only roughly estimated even today? And how reasonable is it to compare these numbers from the 1920s to data from the 2000s?
Gordon himself shows a healthy skepticism toward comparing historical economic indicators with data of more recent vintage. He is quick to admit the limits of using conventional economic data across long historical periods and the inability of numbers to account for qualitative step changes, such as the merits of electric lights over tallow candles or automobiles over horses.
These general limitations of measurement point to some of the difficulties implicit in the thesis that the period from 1920 to 1970 (and particularly from 1940 to 1970) truly represented a golden age of productivity. Perhaps it only represented a golden era for productivity measured by the standards of the 1920s and 1930s, when most of the methods that are used to evaluate economic growth were devised. This is particularly true with respect to comparison of the middle of the twentieth century to the late nineteenth and early twentieth centuries. Along these lines, Gordon writes that productivity estimates for the early twentieth century might be understated by multiple factors.
It is true that the data may not be well suited to proving that the 1950s and 1960s were a golden age of productivity and economic growth compared to the 1870s. That said, the numbers are better suited to comparing the 1950s to the past fifty years because at least roughly similar methods of data compilation and analysis have been in place throughout this span. And the numbers here, as mentioned above, are not pretty. The U.S. economy may not be in a state of crisis at present, but unless one wishes to engage in extreme window-dressing of the statistics, it is impossible not to acknowledge that the rate of economic progress has slowed unusually.
The Secular Stagnation Thesis Misses the Point
Rise and Fall, and Gordon’s work more generally, distinguishes itself from other interpretations of the recent U.S. economic slowdown by looking beyond the financial crisis as a proximate cause of weak economic growth. The competing analyses have either: (a) compared U.S. economic performance in the wake of the financial crisis with the fallout from other financial crises, as Carmen Reinhart and Kenneth Rogoff did in This Time Is Different (Princeton University Press, 2009), or (b) offered arguments likening the U.S. economic slowdown to the secular stagnation of Japan and Europe—and argued for demand-side stimulus measures to ease a difficult and ultimately intractable situation.
The limitations of the first argument, that recent weak economic growth is attributable to the debt hangover from the 2000s, should be clear from Gordon’s presentation. On the face of things, debt overhangs can indeed function as a structural headwind to economic growth, as Gordon writes. But the debt crisis of 2008–9 and its aftermath explain neither the U.S. economy’s weak performance prior to the housing bubble nor the underlying reasons why Americans decided to inflate a housing bubble of record proportions during the 2000s. When examined over a longer time horizon, aggregating the highs and the lows, as Gordon does, the housing bubble and the debt build-up around it seem like a symptom and not a cause of larger problems.
We now know that much of the GDP growth recorded from 2000 to 2008 was illusory—distorted by unsustainable investment in housing and the “wealth effect” experienced by U.S. consumers as their homes increased in value. Productivity growth also declined during that period. If somewhere between 5% and 20% of the economic growth during the 2001–2008 expansion was attributable to the housing bubble, then our current productivity malaise is merely a continuation of the 2000s—but now without an artificial boost from large year-on-year increases in home values and the attendant “wealth effect.” Thus, even after waiting out the process of post-crisis deleveraging, it is not clear that the U.S. economy should be expected to return to some higher level of productivity growth simply because enough time has passed and enough deleveraging has occurred.
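The arithmetic behind this point is simple: if a fixed share of measured growth was bubble-driven, underlying growth is measured growth times one minus that share. The measured rate in the sketch below is a hypothetical placeholder, since the essay quotes only the 5–20% range, not a measured figure:

```python
# Underlying growth net of a bubble-driven share of measured growth. The
# measured rate below is a hypothetical placeholder, not a figure from
# the essay.

def adjusted_growth(measured_pct: float, bubble_share: float) -> float:
    """Growth rate net of the share attributed to the bubble."""
    return measured_pct * (1 - bubble_share)

MEASURED = 2.5  # hypothetical average annual GDP growth, 2001-2008 (%)
for share in (0.05, 0.20):
    print(f"bubble share {share:.0%}: underlying growth ~{adjusted_growth(MEASURED, share):.2f}%/yr")
```

Even at the top of the range, stripping out the bubble leaves underlying growth well below the postwar norm, which is the continuity the paragraph above describes.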
The second hypothesis frequently put forward to explain our economic malaise is most forcefully advanced by Lawrence Summers. Summers has re-popularized the Depression-era term “secular stagnation” (coined in the 1930s by economist Alvin Hansen just as the United States was on the precipice of a period of dramatic secular dynamism). He argues that the U.S. economy has broadly stagnated in the wake of the 2008–9 financial crisis as a result of factors curtailing domestic demand. For Summers, this account serves as justification for expansionary fiscal policy funded by central bank currency creation, which Summers believes will lift economic growth as the government makes up for inadequate private sector demand.
But demand for what? A surge in government investment in, say, roads and airports might increase spending in the economy and thus measured GDP and possibly even employment. But it would not address, among other things, the shortage of engineers or the lack of major new developments in transportation technology. The airports might be cleaner and more spacious, the roads smoother, but the planes and cars would be the same.
Measures designed only to increase aggregate demand miss the point, at least in Gordon’s supply-centric framework. The U.S. economy has not so much stagnated as slowed, according to Gordon’s analysis. More precisely, it has stagnated only insofar as productivity growth has slowed. Although the rate at which productivity is growing may have decelerated, the total number of hours worked by Americans has continued to rise, and the capital stock continues to increase.
Thus, using Gordon’s framework, it is possible to draw an important distinction between Japanese and European stagnation and the American slowdown. In Japan and Europe, the economy and population as a whole have ceased to grow, but per capita income and per capita output have remained level or even risen. In America, by contrast, the population and economy have increased in size, but per capita growth has slowed. Japan and Europe are in the throes of demographic stagnation, a process much more deleterious to a country’s growth prospects than the kind of economic malaise in which the United States finds itself at the moment. The United States is seeing a demographics-led slowdown in its labor force as well, but it has not yet reached the kind of stasis seen in Japan or Europe, as recent declines in U.S. total fertility have been less sudden than the drop-off in fertility in, for example, Japan or Italy.
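The distinction can be stated as a simple identity: per capita growth is aggregate growth net of population growth. The figures in the sketch below are hypothetical, chosen only to illustrate the two cases described above:

```python
# Per-capita growth implied by aggregate GDP growth and population
# growth. All input figures are hypothetical, chosen to illustrate the
# contrast drawn in the text.

def per_capita_growth(gdp_growth_pct: float, pop_growth_pct: float) -> float:
    """Exact per-capita growth rate given aggregate and population growth."""
    return ((1 + gdp_growth_pct / 100) / (1 + pop_growth_pct / 100) - 1) * 100

# American-style slowdown: economy and population both grow, per-capita gain modest
us_case = per_capita_growth(2.0, 0.7)
# Japanese-style stagnation: flat aggregate, shrinking population, per-capita gain positive
japan_case = per_capita_growth(0.0, -0.4)
print(f"US-style: {us_case:.2f}% per capita; Japan-style: {japan_case:.2f}% per capita")
```

The point of the comparison is that an economy can post zero aggregate growth yet still deliver rising per capita output if its population is shrinking, which is the Japanese case Gordon’s framework separates from the American one.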
Is Political Necessity the Mother of Inventions?
Perhaps the strongest argument in favor of the secular stagnation approach, or at least its policy prescription of large fiscal stimulus, is historical: when the United States last experienced a protracted period of slow economic growth, it was jolted out of it to some extent by the New Deal and by increased government spending to fight World War II. (The New Deal’s effects are hotly debated, e.g., in Amity Shlaes’s The Forgotten Man (HarperCollins, 2007).) Gordon’s book is mostly a history of inventions as they pertain to the American standard of living, but the history of invention is not always driven by conscious attempts to improve the standard of living. Many of the great inventions and scientific breakthroughs that ultimately transformed Americans’ standards of living were first created, rather, out of military necessity.
From jet airplanes and rockets to improved motor vehicles, advances in manufacturing technologies, the development of computing and telecommunications, and even the splitting of the atom, a great many of the most productivity-enhancing technologies were developed by the military during WWII and the Cold War. Gordon addresses this issue in a chapter called “The Great Leap Forward from the 1920s to the 1950s: What Set of Miracles Created It?” This chapter is the most controversial part of the book, much more so than any discussion of Gordon’s “techno-pessimism.” For Gordon concludes that “the Great Depression and World War II directly contributed to the great leap.” Specifically, government investment in and direction of the economy in the 1940s were the main sources of U.S. innovation and economic growth during the 1950s and 1960s, according to Gordon. In support of this assertion, Gordon points to an astonishing fact sourced to his 1967 Ph.D. thesis: “The number of machine tools in the U.S. doubled from 1940 to 1945 and almost all of these new machine tools were paid for by the government rather than by private firms.”
Gordon’s argument here is essentially the same as that made by Reuven Brenner in his book Rivalry: In Business, Science, Among Nations (Cambridge University Press, 1990): namely, that relative failure or fear of being leapfrogged is the mother of invention and technological innovation. Gordon is to some extent applying Brenner’s thesis. The United States only rolled the dice on scientists at IBM for code breaking, physicists at Los Alamos for nuclear energy, the ambitions of Henry Ford for the production of airplanes, and so forth, because it feared total ruin in a world war.
This argument need not imply central command of the economy. As Arthur Herman argued in Freedom’s Forge (Random House, 2012), a book cited by Gordon, a distinguishing aspect of the Roosevelt administration’s war mobilization was its decentralization. Herman described the process, when functioning properly, as follows: “All you had to do was put in the orders, finance the plant expansion, then stand back and let things happen.” In short, Herman portrays an economy that is not purely planned but is also not purely responsive to market signals. It rather follows a corporatist structure in which defense contracts are bid on by firms and practical production decisions taken in a decentralized fashion, while the government sets the general direction and may even manipulate prices. Financial markets are highly regulated and savings are channeled into war bonds. This is not a Five-Year Plan. But it is also not a free market.
Gordon also seems to think that the immigration and trade restrictions implemented in the United States during the 1920s and the Depression period contributed to rapidly rising standards of living in the 1950s. His argument is that “the high tariff wall allowed American manufacturing to introduce all available innovations into U.S.-based factories without the outsourcing that has become common in the last several decades,” and “the lack of competition from immigrants and imports boosted the wages of workers at the bottom.”
These are extremely controversial claims, of course, which contradict the academic consensus in both history and economic theory. The Smoot-Hawley Act, for instance, which raised U.S. tariffs, is widely seen as having deepened the Depression. Human capital is generally thought of as a form of investment capital that bolsters growth. Gordon is surely aware of this view, but his arguments raise questions whose answers are typically taken for granted: How much less of U.S. industry would have survived the 1930s without Smoot-Hawley? How much more would American wages have fallen in the 1930s if the less restrictive, pre-1924 immigration policy had been maintained? Or would more immigration have created additional demand? Would the Sun Belt have become heavily populated fifty years earlier with more immigrants to people its cities? Would more innovators have immigrated to the United States? What would have been the effect on American industry during World War II, and on the war effort, of a larger pool of laborers and soldiers?
These are all counterfactual questions, unanswerable by definition. The bottom line, however, is that government economic intervention before World War II and investment during World War II, in Gordon’s telling, extended the “big wave” of the second industrial revolution and unleashed the technological evolution of the 1950s and 1960s.
Moreover, these questions point to a fundamental truth that Gordon sometimes overlooks: it is impossible to think about economic policy without also thinking about politics. Is it a coincidence that the key dates that Gordon uses to break up his history of productivity are 1870, 1945, and 1970?
Political Economy: The Missing Factor
Enter political economy. Economists typically treat technology as an exogenous factor that amplifies capital and labor to enhance economic growth; in other words, they treat technology as a mystery. In the classical liberal framework, technological development is thought to come from economically incentivized individuals developing new technologies—rational actors innovating for profit. Academic paradigms tend to treat invention as altruistically produced by impassioned or inspired visionary researchers, creating for the sake of creativity. In Gordon’s treatment of the 1930s and, even more so, the 1940s, however, we see something different: technological advancement is attributed to state-guided investment combined with a willingness to put technological developments into use to remedy adverse conditions.
While Gordon’s account might be at odds with the general agnosticism of economists concerning the causes of technological development, it does offer some considerable explanatory power. It would be difficult, for instance, to talk about the creation of the U.S. railroad and telegraph systems absent a consideration of the political imperative for their construction—settling and governing the American West—and Congress’s granting of lands and easements for construction. Was it the steam engine that created the American railroads? Was it the entrepreneurship of investors and railroad builders along with the sacrifices of workers? Would any of it have been possible without Congress, easements, and eminent domain? And was not the process of granting land to railroads one of the great political controversies of the nineteenth century?
Consider jurisdictions in which the government cannot direct or secure investment. Even basic innovations such as electricity and sanitation require a political context for their implementation, as demonstrated by the fact that the most dysfunctional political environments in the world still do not offer these more than century-old technologies to their citizens. If a city cannot muster the collective will to install a sewer system, does modern sanitation technology even exist for its residents? Surely technology is not the only factor. Imagining economic relationships outside of a political context, in a world of rational agents alone, is illogical; no such world exists.
History is riddled with examples of true scientific breakthroughs that do not cause material advances upon invention. David Landes’s The Wealth and Poverty of Nations (Norton, 1998) addresses the question of why the industrial revolution(s) happened in Europe as opposed to China. In Landes’s telling, China’s scholarly gentry class was superior to its European equivalent in terms of scientific knowledge and technological innovation through at least the fifteenth century. Landes argues that Chinese civilization had developed gunpowder, printing, cannons, and the compass before Europe—yet none of these were gainfully or fully commercialized. Landes attributes the slow pace of economic development in China during the European Renaissance and industrial revolution to China’s entrenched political organization that cultivated advanced learning but not commercialization. Landes asserts: “Improvement would have challenged comfortable orthodoxies and entailed insubordination.”
The reasoning behind this thesis suggests that in an established and hierarchical government, academic science and discovery may be tolerated, and recreational uses of technology, such as fireworks, may be accepted, but the implementation of grand scientific projects on a societal scale can be deemed too disruptive to the existing order.
For scientific innovation to have material consequences, by this account, those in possession of power in a society must be willing to change, perhaps drastically. Political direction is here not only a possible explanatory variable for innovation, but is also perhaps even more plausibly an explanation for the failure of scientific innovation to have material effects. Financial markets can provide a check against political impediments to progress, allowing investors to finance ideas that governments do not or will not. In the extreme counter-example of dictatorships, only ideas that contribute to state control and power are (deliberately) financed, because the financial market is captive to the state and the government’s goals are supreme.
This idea brings us back to one of the most important questions of political economy: who wins, and by extension, who loses from technological change? Reuven Brenner has described what he calls betting on ideas, even against long odds, as a rational strategy for those who perceive a risk of being left behind—whether at the level of the individual, the firm, or the state. He contrasts a strategy of betting on ideas with a strategy of insurance purchasing, bureaucratization, and conservatism for an incumbent guarding an established position. This paradigm applies as much in politics as in business. Does an established state that thinks of itself as a “unipolar” power have reason to take technological bets and risks? How much risk and change do its citizens want? And are governments of such states less likely to be held accountable?
In surveying the economic slowdown of the 1970s and beyond, one wishes that Gordon had asked a few more second-order and third-order questions akin to those that he asked about the causes of growth in the 1940s–60s. What were the major political decisions of the period from the 1960s onward that caused first innovation and technological adoption, and then productivity and economic growth, to slow so markedly? In what way did the political context and national objectives change over the past fifty years? Is it simply, as Gordon has put it, that “the Great Inventions of 1860–1900 … created a fundamental transformation in the American standard of living…. In comparison, computers and the Internet fall short”?
Stasis and Its Alternatives:
The Fallacy of Dividing Techno-Optimists and Pessimists
Viewed through the prism of political economy, properly speaking, there should not be techno-optimists or techno-pessimists. Rather the debate should be between advocates of a status quo and risk-takers. It can make sense to be optimistic that a government will create a constructive backdrop for investment and be daring enough to take risks in policy in order to leapfrog competitors. It can also make sense to hope that a given government will play its political hand conservatively and do all that it can to mitigate risk for itself, its citizens, and even established companies or corporate interests. Either of these outcomes is within the realm of rational political decision making, though the choices made are likely to be contingent on a nation’s circumstances and its competitors. The meaning of technology will be defined by its political context and not only by its scientific content.
If this is true with respect to technological innovation, it is equally true with respect to questions that relate to economic growth. Take, for instance, the possible ramifications of driverless cars or transportation drones operating in an urban setting—perhaps the most frequently discussed recent technologically enabled possibility, and one that Gordon treats at length. It is conceivable, as Gordon suggests, that the technology simply will not work, or that if it does, it will not be transformative. It is also entirely possible to imagine a future in which a guaranteed universal income cushions the lives of millions of workers displaced by driverless cars and drones and robots on shop floors. It is equally possible to imagine a world in which many millions of additional Americans are employed to maintain and organize a fleet of cars and drones ferrying ever more goods and people ever longer distances and undertaking ever more tasks. One can additionally imagine a future in which the cost of insuring a fleet of drones and driverless cars is too expensive to be practical, or even a world in which the testing of these devices brings too many legal risks to be practicable and progress remains extremely slow.
Instead of considering technologies alone, as though they are simply a function of primary research budgets at universities, we need to ask about whether our laws and political choices strike the right balance between stasis and change. This is not to say that technological progress is simply a matter of political choice, or to promote a false duality between innovation and its absence. As Gordon argues in his book, there are great inventions that represented fundamental improvements in material life. The inventions of any given generation will, for circumstantial reasons, not necessarily be as great as the inventions of other generations. But the improvements that Gordon documents both in productivity and standards of living do not always result from inventions themselves. The period whose progress Gordon most admires, after all, was not the second industrial revolution’s heyday in the early twentieth century, but rather, its evolution and morphing in the 1940s into the productivity boom of the 1950s and 1960s.
It has become commonplace for economists to point out that with administered interest rates near zero, extraordinary economic projects are affordable. I have heard it suggested that, with interest rates so low, it could be cost-effective to increase the landmass of coastal cities such as Boston and New York (both of which have been expanded this way in the past), or that it would be affordable to create massive hydroelectric plants on the scale of the Hoover Dam. These hypothetical undertakings may or may not be advisable, but surely they would address popular social demands: relief of urban real estate costs and the provision of clean power. They would do so using proven technologies. Large endeavors of this nature could potentially lead to the evolution of existing technologies and even innovation. But neither of these types of projects, undertaken in the past, seems plausible today. It is even harder to imagine the undertaking of more technologically ambitious city, state, or federal projects whose side effects would be totally unknown, and whose implementation could bring great risk.
Why has our willingness to act shifted so dramatically? What underlying preferences are embedded in a hypothetical preference for stasis? What underlying preferences are embedded in a hypothetical preference for change? If we choose change, we must ask what kind of change we want—and hold ourselves accountable to the accomplishment of change for the better. If we choose to be conservative in our economic policy, then we must ask what we are conserving.